Test Report: KVM_Linux_crio 17206

f478b3e95ad7f4002b1f24747b20ea33f6e08bc3:2023-11-28:32057

Tests failed (28/303)

Order  Failed test  Duration (s)
35 TestAddons/parallel/Ingress 163.61
48 TestAddons/StoppedEnableDisable 155.4
164 TestIngressAddonLegacy/serial/ValidateIngressAddons 182.15
212 TestMultiNode/serial/PingHostFrom2Pods 3.16
218 TestMultiNode/serial/RestartKeepsNodes 780.49
220 TestMultiNode/serial/StopMultiNode 143.59
227 TestPreload 254.28
233 TestRunningBinaryUpgrade 172.68
265 TestNoKubernetes/serial/StartNoArgs 101
268 TestStoppedBinaryUpgrade/Upgrade 262.89
283 TestStartStop/group/old-k8s-version/serial/Stop 139.89
286 TestStartStop/group/embed-certs/serial/Stop 140.26
292 TestStartStop/group/no-preload/serial/Stop 139.53
294 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.42
296 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.72
298 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
300 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
302 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
304 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.06
305 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.13
306 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.03
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.28
308 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 461.9
309 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 542.54
310 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 309.78
311 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 182.88
317 TestStartStop/group/newest-cni/serial/Stop 140.45
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 12.41
TestAddons/parallel/Ingress (163.61s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-052905 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-052905 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-052905 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [cab3e3aa-ecb4-4336-8075-199b469e8427] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [cab3e3aa-ecb4-4336-8075-199b469e8427] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 19.030921721s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-052905 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-052905 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.216522855s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context addons-052905 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-052905 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.221
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-052905 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-052905 addons disable ingress-dns --alsologtostderr -v=1: (1.146508266s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-052905 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-052905 addons disable ingress --alsologtostderr -v=1: (7.748750617s)
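The curl step above is what failed: ssh exited with status 28, which is curl's "operation timed out" exit code, so the request to the ingress controller never returned within the deadline. For reference, a rough manual re-run of that check (a sketch only, assuming the addons-052905 profile from this run is still up with the ingress addon enabled) would be:

	kubectl --context addons-052905 -n ingress-nginx get pods -l app.kubernetes.io/component=controller
	out/minikube-linux-amd64 -p addons-052905 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

Both commands mirror the ones logged above (addons_test.go:206 and addons_test.go:261).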
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-052905 -n addons-052905
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-052905 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-052905 logs -n 25: (1.379550705s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-480485 | jenkins | v1.32.0 | 27 Nov 23 23:26 UTC |                     |
	|         | -p download-only-480485                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 27 Nov 23 23:26 UTC | 27 Nov 23 23:26 UTC |
	| delete  | -p download-only-480485                                                                     | download-only-480485 | jenkins | v1.32.0 | 27 Nov 23 23:26 UTC | 27 Nov 23 23:26 UTC |
	| delete  | -p download-only-480485                                                                     | download-only-480485 | jenkins | v1.32.0 | 27 Nov 23 23:26 UTC | 27 Nov 23 23:26 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-551564 | jenkins | v1.32.0 | 27 Nov 23 23:26 UTC |                     |
	|         | binary-mirror-551564                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35587                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-551564                                                                     | binary-mirror-551564 | jenkins | v1.32.0 | 27 Nov 23 23:26 UTC | 27 Nov 23 23:26 UTC |
	| addons  | disable dashboard -p                                                                        | addons-052905        | jenkins | v1.32.0 | 27 Nov 23 23:26 UTC |                     |
	|         | addons-052905                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-052905        | jenkins | v1.32.0 | 27 Nov 23 23:26 UTC |                     |
	|         | addons-052905                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-052905 --wait=true                                                                | addons-052905        | jenkins | v1.32.0 | 27 Nov 23 23:26 UTC | 27 Nov 23 23:30 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-052905        | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC | 27 Nov 23 23:30 UTC |
	|         | -p addons-052905                                                                            |                      |         |         |                     |                     |
	| addons  | addons-052905 addons                                                                        | addons-052905        | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC | 27 Nov 23 23:30 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-052905        | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC | 27 Nov 23 23:30 UTC |
	|         | addons-052905                                                                               |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-052905        | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC | 27 Nov 23 23:30 UTC |
	|         | addons-052905                                                                               |                      |         |         |                     |                     |
	| ip      | addons-052905 ip                                                                            | addons-052905        | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC | 27 Nov 23 23:30 UTC |
	| addons  | addons-052905 addons disable                                                                | addons-052905        | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC | 27 Nov 23 23:30 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-052905        | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC | 27 Nov 23 23:30 UTC |
	|         | -p addons-052905                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-052905 ssh curl -s                                                                   | addons-052905        | jenkins | v1.32.0 | 27 Nov 23 23:30 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-052905 addons                                                                        | addons-052905        | jenkins | v1.32.0 | 27 Nov 23 23:31 UTC | 27 Nov 23 23:31 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-052905 ssh cat                                                                       | addons-052905        | jenkins | v1.32.0 | 27 Nov 23 23:31 UTC | 27 Nov 23 23:31 UTC |
	|         | /opt/local-path-provisioner/pvc-b65845f2-c00c-42ed-bb18-1777f72877be_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-052905 addons disable                                                                | addons-052905        | jenkins | v1.32.0 | 27 Nov 23 23:31 UTC | 27 Nov 23 23:31 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-052905 addons                                                                        | addons-052905        | jenkins | v1.32.0 | 27 Nov 23 23:31 UTC | 27 Nov 23 23:31 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-052905 addons disable                                                                | addons-052905        | jenkins | v1.32.0 | 27 Nov 23 23:31 UTC | 27 Nov 23 23:31 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-052905 ip                                                                            | addons-052905        | jenkins | v1.32.0 | 27 Nov 23 23:33 UTC | 27 Nov 23 23:33 UTC |
	| addons  | addons-052905 addons disable                                                                | addons-052905        | jenkins | v1.32.0 | 27 Nov 23 23:33 UTC | 27 Nov 23 23:33 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-052905 addons disable                                                                | addons-052905        | jenkins | v1.32.0 | 27 Nov 23 23:33 UTC | 27 Nov 23 23:33 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:26:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:26:51.430899   12542 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:26:51.431174   12542 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:26:51.431185   12542 out.go:309] Setting ErrFile to fd 2...
	I1127 23:26:51.431192   12542 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:26:51.431386   12542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1127 23:26:51.431975   12542 out.go:303] Setting JSON to false
	I1127 23:26:51.432791   12542 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":559,"bootTime":1701127053,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 23:26:51.432854   12542 start.go:138] virtualization: kvm guest
	I1127 23:26:51.434856   12542 out.go:177] * [addons-052905] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 23:26:51.436420   12542 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:26:51.436427   12542 notify.go:220] Checking for updates...
	I1127 23:26:51.438073   12542 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:26:51.439518   12542 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1127 23:26:51.440690   12542 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1127 23:26:51.441915   12542 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 23:26:51.443267   12542 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:26:51.444786   12542 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:26:51.475113   12542 out.go:177] * Using the kvm2 driver based on user configuration
	I1127 23:26:51.476511   12542 start.go:298] selected driver: kvm2
	I1127 23:26:51.476523   12542 start.go:902] validating driver "kvm2" against <nil>
	I1127 23:26:51.476539   12542 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:26:51.477215   12542 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:26:51.477288   12542 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17206-4749/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1127 23:26:51.490641   12542 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1127 23:26:51.490692   12542 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 23:26:51.490883   12542 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1127 23:26:51.490942   12542 cni.go:84] Creating CNI manager for ""
	I1127 23:26:51.490971   12542 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1127 23:26:51.490985   12542 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1127 23:26:51.490994   12542 start_flags.go:323] config:
	{Name:addons-052905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-052905 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:26:51.491127   12542 iso.go:125] acquiring lock: {Name:mkcbf4fbddcb89ef7fa17df683cb708781ecb7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:26:51.492891   12542 out.go:177] * Starting control plane node addons-052905 in cluster addons-052905
	I1127 23:26:51.494261   12542 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:26:51.494294   12542 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1127 23:26:51.494306   12542 cache.go:56] Caching tarball of preloaded images
	I1127 23:26:51.494383   12542 preload.go:174] Found /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1127 23:26:51.494393   12542 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1127 23:26:51.494686   12542 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/config.json ...
	I1127 23:26:51.494706   12542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/config.json: {Name:mk6a1f876c905fde13dd25279412406dc9e7503c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:26:51.494830   12542 start.go:365] acquiring machines lock for addons-052905: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1127 23:26:51.494873   12542 start.go:369] acquired machines lock for "addons-052905" in 30.264µs
	I1127 23:26:51.494889   12542 start.go:93] Provisioning new machine with config: &{Name:addons-052905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:addons-052905 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 23:26:51.494993   12542 start.go:125] createHost starting for "" (driver="kvm2")
	I1127 23:26:51.496691   12542 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1127 23:26:51.496823   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:26:51.496862   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:26:51.509994   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
	I1127 23:26:51.510369   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:26:51.510862   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:26:51.510881   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:26:51.511209   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:26:51.511352   12542 main.go:141] libmachine: (addons-052905) Calling .GetMachineName
	I1127 23:26:51.511480   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:26:51.511654   12542 start.go:159] libmachine.API.Create for "addons-052905" (driver="kvm2")
	I1127 23:26:51.511683   12542 client.go:168] LocalClient.Create starting
	I1127 23:26:51.511714   12542 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem
	I1127 23:26:51.570945   12542 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem
	I1127 23:26:51.686383   12542 main.go:141] libmachine: Running pre-create checks...
	I1127 23:26:51.686406   12542 main.go:141] libmachine: (addons-052905) Calling .PreCreateCheck
	I1127 23:26:51.686904   12542 main.go:141] libmachine: (addons-052905) Calling .GetConfigRaw
	I1127 23:26:51.687363   12542 main.go:141] libmachine: Creating machine...
	I1127 23:26:51.687380   12542 main.go:141] libmachine: (addons-052905) Calling .Create
	I1127 23:26:51.687521   12542 main.go:141] libmachine: (addons-052905) Creating KVM machine...
	I1127 23:26:51.688643   12542 main.go:141] libmachine: (addons-052905) DBG | found existing default KVM network
	I1127 23:26:51.689357   12542 main.go:141] libmachine: (addons-052905) DBG | I1127 23:26:51.689204   12564 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I1127 23:26:51.694556   12542 main.go:141] libmachine: (addons-052905) DBG | trying to create private KVM network mk-addons-052905 192.168.39.0/24...
	I1127 23:26:51.759652   12542 main.go:141] libmachine: (addons-052905) DBG | private KVM network mk-addons-052905 192.168.39.0/24 created
	I1127 23:26:51.759681   12542 main.go:141] libmachine: (addons-052905) DBG | I1127 23:26:51.759587   12564 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17206-4749/.minikube
	I1127 23:26:51.759696   12542 main.go:141] libmachine: (addons-052905) Setting up store path in /home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905 ...
	I1127 23:26:51.759718   12542 main.go:141] libmachine: (addons-052905) Building disk image from file:///home/jenkins/minikube-integration/17206-4749/.minikube/cache/iso/amd64/minikube-v1.32.1-1701107474-17206-amd64.iso
	I1127 23:26:51.759734   12542 main.go:141] libmachine: (addons-052905) Downloading /home/jenkins/minikube-integration/17206-4749/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17206-4749/.minikube/cache/iso/amd64/minikube-v1.32.1-1701107474-17206-amd64.iso...
	I1127 23:26:51.973638   12542 main.go:141] libmachine: (addons-052905) DBG | I1127 23:26:51.973509   12564 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa...
	I1127 23:26:52.093386   12542 main.go:141] libmachine: (addons-052905) DBG | I1127 23:26:52.093277   12564 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/addons-052905.rawdisk...
	I1127 23:26:52.093412   12542 main.go:141] libmachine: (addons-052905) DBG | Writing magic tar header
	I1127 23:26:52.093427   12542 main.go:141] libmachine: (addons-052905) DBG | Writing SSH key tar header
	I1127 23:26:52.093449   12542 main.go:141] libmachine: (addons-052905) DBG | I1127 23:26:52.093382   12564 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905 ...
	I1127 23:26:52.093532   12542 main.go:141] libmachine: (addons-052905) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905
	I1127 23:26:52.093560   12542 main.go:141] libmachine: (addons-052905) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube/machines
	I1127 23:26:52.093570   12542 main.go:141] libmachine: (addons-052905) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905 (perms=drwx------)
	I1127 23:26:52.093581   12542 main.go:141] libmachine: (addons-052905) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube/machines (perms=drwxr-xr-x)
	I1127 23:26:52.093588   12542 main.go:141] libmachine: (addons-052905) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube (perms=drwxr-xr-x)
	I1127 23:26:52.093600   12542 main.go:141] libmachine: (addons-052905) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749 (perms=drwxrwxr-x)
	I1127 23:26:52.093606   12542 main.go:141] libmachine: (addons-052905) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1127 23:26:52.093615   12542 main.go:141] libmachine: (addons-052905) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1127 23:26:52.093622   12542 main.go:141] libmachine: (addons-052905) Creating domain...
	I1127 23:26:52.093670   12542 main.go:141] libmachine: (addons-052905) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube
	I1127 23:26:52.093705   12542 main.go:141] libmachine: (addons-052905) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749
	I1127 23:26:52.093723   12542 main.go:141] libmachine: (addons-052905) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1127 23:26:52.093733   12542 main.go:141] libmachine: (addons-052905) DBG | Checking permissions on dir: /home/jenkins
	I1127 23:26:52.093744   12542 main.go:141] libmachine: (addons-052905) DBG | Checking permissions on dir: /home
	I1127 23:26:52.093754   12542 main.go:141] libmachine: (addons-052905) DBG | Skipping /home - not owner
	I1127 23:26:52.094695   12542 main.go:141] libmachine: (addons-052905) define libvirt domain using xml: 
	I1127 23:26:52.094715   12542 main.go:141] libmachine: (addons-052905) <domain type='kvm'>
	I1127 23:26:52.094725   12542 main.go:141] libmachine: (addons-052905)   <name>addons-052905</name>
	I1127 23:26:52.094743   12542 main.go:141] libmachine: (addons-052905)   <memory unit='MiB'>4000</memory>
	I1127 23:26:52.094757   12542 main.go:141] libmachine: (addons-052905)   <vcpu>2</vcpu>
	I1127 23:26:52.094762   12542 main.go:141] libmachine: (addons-052905)   <features>
	I1127 23:26:52.094768   12542 main.go:141] libmachine: (addons-052905)     <acpi/>
	I1127 23:26:52.094774   12542 main.go:141] libmachine: (addons-052905)     <apic/>
	I1127 23:26:52.094780   12542 main.go:141] libmachine: (addons-052905)     <pae/>
	I1127 23:26:52.094787   12542 main.go:141] libmachine: (addons-052905)     
	I1127 23:26:52.094814   12542 main.go:141] libmachine: (addons-052905)   </features>
	I1127 23:26:52.094833   12542 main.go:141] libmachine: (addons-052905)   <cpu mode='host-passthrough'>
	I1127 23:26:52.094846   12542 main.go:141] libmachine: (addons-052905)   
	I1127 23:26:52.094856   12542 main.go:141] libmachine: (addons-052905)   </cpu>
	I1127 23:26:52.094863   12542 main.go:141] libmachine: (addons-052905)   <os>
	I1127 23:26:52.094871   12542 main.go:141] libmachine: (addons-052905)     <type>hvm</type>
	I1127 23:26:52.094877   12542 main.go:141] libmachine: (addons-052905)     <boot dev='cdrom'/>
	I1127 23:26:52.094884   12542 main.go:141] libmachine: (addons-052905)     <boot dev='hd'/>
	I1127 23:26:52.094890   12542 main.go:141] libmachine: (addons-052905)     <bootmenu enable='no'/>
	I1127 23:26:52.094895   12542 main.go:141] libmachine: (addons-052905)   </os>
	I1127 23:26:52.094907   12542 main.go:141] libmachine: (addons-052905)   <devices>
	I1127 23:26:52.094925   12542 main.go:141] libmachine: (addons-052905)     <disk type='file' device='cdrom'>
	I1127 23:26:52.094949   12542 main.go:141] libmachine: (addons-052905)       <source file='/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/boot2docker.iso'/>
	I1127 23:26:52.094963   12542 main.go:141] libmachine: (addons-052905)       <target dev='hdc' bus='scsi'/>
	I1127 23:26:52.094971   12542 main.go:141] libmachine: (addons-052905)       <readonly/>
	I1127 23:26:52.094977   12542 main.go:141] libmachine: (addons-052905)     </disk>
	I1127 23:26:52.094987   12542 main.go:141] libmachine: (addons-052905)     <disk type='file' device='disk'>
	I1127 23:26:52.095000   12542 main.go:141] libmachine: (addons-052905)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1127 23:26:52.095016   12542 main.go:141] libmachine: (addons-052905)       <source file='/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/addons-052905.rawdisk'/>
	I1127 23:26:52.095028   12542 main.go:141] libmachine: (addons-052905)       <target dev='hda' bus='virtio'/>
	I1127 23:26:52.095041   12542 main.go:141] libmachine: (addons-052905)     </disk>
	I1127 23:26:52.095053   12542 main.go:141] libmachine: (addons-052905)     <interface type='network'>
	I1127 23:26:52.095064   12542 main.go:141] libmachine: (addons-052905)       <source network='mk-addons-052905'/>
	I1127 23:26:52.095075   12542 main.go:141] libmachine: (addons-052905)       <model type='virtio'/>
	I1127 23:26:52.095085   12542 main.go:141] libmachine: (addons-052905)     </interface>
	I1127 23:26:52.095092   12542 main.go:141] libmachine: (addons-052905)     <interface type='network'>
	I1127 23:26:52.095106   12542 main.go:141] libmachine: (addons-052905)       <source network='default'/>
	I1127 23:26:52.095120   12542 main.go:141] libmachine: (addons-052905)       <model type='virtio'/>
	I1127 23:26:52.095131   12542 main.go:141] libmachine: (addons-052905)     </interface>
	I1127 23:26:52.095146   12542 main.go:141] libmachine: (addons-052905)     <serial type='pty'>
	I1127 23:26:52.095158   12542 main.go:141] libmachine: (addons-052905)       <target port='0'/>
	I1127 23:26:52.095168   12542 main.go:141] libmachine: (addons-052905)     </serial>
	I1127 23:26:52.095183   12542 main.go:141] libmachine: (addons-052905)     <console type='pty'>
	I1127 23:26:52.095198   12542 main.go:141] libmachine: (addons-052905)       <target type='serial' port='0'/>
	I1127 23:26:52.095209   12542 main.go:141] libmachine: (addons-052905)     </console>
	I1127 23:26:52.095223   12542 main.go:141] libmachine: (addons-052905)     <rng model='virtio'>
	I1127 23:26:52.095236   12542 main.go:141] libmachine: (addons-052905)       <backend model='random'>/dev/random</backend>
	I1127 23:26:52.095249   12542 main.go:141] libmachine: (addons-052905)     </rng>
	I1127 23:26:52.095259   12542 main.go:141] libmachine: (addons-052905)     
	I1127 23:26:52.095266   12542 main.go:141] libmachine: (addons-052905)     
	I1127 23:26:52.095274   12542 main.go:141] libmachine: (addons-052905)   </devices>
	I1127 23:26:52.095298   12542 main.go:141] libmachine: (addons-052905) </domain>
	I1127 23:26:52.095318   12542 main.go:141] libmachine: (addons-052905) 
	I1127 23:26:52.101420   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:45:ea:05 in network default
	I1127 23:26:52.102382   12542 main.go:141] libmachine: (addons-052905) Ensuring networks are active...
	I1127 23:26:52.102406   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:26:52.103022   12542 main.go:141] libmachine: (addons-052905) Ensuring network default is active
	I1127 23:26:52.103357   12542 main.go:141] libmachine: (addons-052905) Ensuring network mk-addons-052905 is active
	I1127 23:26:52.105052   12542 main.go:141] libmachine: (addons-052905) Getting domain xml...
	I1127 23:26:52.105777   12542 main.go:141] libmachine: (addons-052905) Creating domain...
	I1127 23:26:53.493667   12542 main.go:141] libmachine: (addons-052905) Waiting to get IP...
	I1127 23:26:53.494505   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:26:53.494879   12542 main.go:141] libmachine: (addons-052905) DBG | unable to find current IP address of domain addons-052905 in network mk-addons-052905
	I1127 23:26:53.494908   12542 main.go:141] libmachine: (addons-052905) DBG | I1127 23:26:53.494852   12564 retry.go:31] will retry after 277.882738ms: waiting for machine to come up
	I1127 23:26:53.774329   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:26:53.774780   12542 main.go:141] libmachine: (addons-052905) DBG | unable to find current IP address of domain addons-052905 in network mk-addons-052905
	I1127 23:26:53.774809   12542 main.go:141] libmachine: (addons-052905) DBG | I1127 23:26:53.774735   12564 retry.go:31] will retry after 279.500808ms: waiting for machine to come up
	I1127 23:26:54.056115   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:26:54.056467   12542 main.go:141] libmachine: (addons-052905) DBG | unable to find current IP address of domain addons-052905 in network mk-addons-052905
	I1127 23:26:54.056524   12542 main.go:141] libmachine: (addons-052905) DBG | I1127 23:26:54.056430   12564 retry.go:31] will retry after 459.861943ms: waiting for machine to come up
	I1127 23:26:54.518100   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:26:54.518549   12542 main.go:141] libmachine: (addons-052905) DBG | unable to find current IP address of domain addons-052905 in network mk-addons-052905
	I1127 23:26:54.518581   12542 main.go:141] libmachine: (addons-052905) DBG | I1127 23:26:54.518493   12564 retry.go:31] will retry after 532.088565ms: waiting for machine to come up
	I1127 23:26:55.052069   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:26:55.052476   12542 main.go:141] libmachine: (addons-052905) DBG | unable to find current IP address of domain addons-052905 in network mk-addons-052905
	I1127 23:26:55.052521   12542 main.go:141] libmachine: (addons-052905) DBG | I1127 23:26:55.052462   12564 retry.go:31] will retry after 595.00676ms: waiting for machine to come up
	I1127 23:26:55.649220   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:26:55.649679   12542 main.go:141] libmachine: (addons-052905) DBG | unable to find current IP address of domain addons-052905 in network mk-addons-052905
	I1127 23:26:55.649717   12542 main.go:141] libmachine: (addons-052905) DBG | I1127 23:26:55.649616   12564 retry.go:31] will retry after 694.281088ms: waiting for machine to come up
	I1127 23:26:56.345322   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:26:56.345699   12542 main.go:141] libmachine: (addons-052905) DBG | unable to find current IP address of domain addons-052905 in network mk-addons-052905
	I1127 23:26:56.345722   12542 main.go:141] libmachine: (addons-052905) DBG | I1127 23:26:56.345656   12564 retry.go:31] will retry after 1.116232569s: waiting for machine to come up
	I1127 23:26:57.463365   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:26:57.463801   12542 main.go:141] libmachine: (addons-052905) DBG | unable to find current IP address of domain addons-052905 in network mk-addons-052905
	I1127 23:26:57.463825   12542 main.go:141] libmachine: (addons-052905) DBG | I1127 23:26:57.463756   12564 retry.go:31] will retry after 898.786685ms: waiting for machine to come up
	I1127 23:26:58.363917   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:26:58.364347   12542 main.go:141] libmachine: (addons-052905) DBG | unable to find current IP address of domain addons-052905 in network mk-addons-052905
	I1127 23:26:58.364368   12542 main.go:141] libmachine: (addons-052905) DBG | I1127 23:26:58.364307   12564 retry.go:31] will retry after 1.180843701s: waiting for machine to come up
	I1127 23:26:59.546921   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:26:59.547372   12542 main.go:141] libmachine: (addons-052905) DBG | unable to find current IP address of domain addons-052905 in network mk-addons-052905
	I1127 23:26:59.547399   12542 main.go:141] libmachine: (addons-052905) DBG | I1127 23:26:59.547332   12564 retry.go:31] will retry after 1.926272246s: waiting for machine to come up
	I1127 23:27:01.476261   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:01.476698   12542 main.go:141] libmachine: (addons-052905) DBG | unable to find current IP address of domain addons-052905 in network mk-addons-052905
	I1127 23:27:01.476726   12542 main.go:141] libmachine: (addons-052905) DBG | I1127 23:27:01.476650   12564 retry.go:31] will retry after 2.408708988s: waiting for machine to come up
	I1127 23:27:03.889863   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:03.890330   12542 main.go:141] libmachine: (addons-052905) DBG | unable to find current IP address of domain addons-052905 in network mk-addons-052905
	I1127 23:27:03.890354   12542 main.go:141] libmachine: (addons-052905) DBG | I1127 23:27:03.890284   12564 retry.go:31] will retry after 3.328930965s: waiting for machine to come up
	I1127 23:27:07.221471   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:07.221829   12542 main.go:141] libmachine: (addons-052905) DBG | unable to find current IP address of domain addons-052905 in network mk-addons-052905
	I1127 23:27:07.221857   12542 main.go:141] libmachine: (addons-052905) DBG | I1127 23:27:07.221812   12564 retry.go:31] will retry after 4.489627769s: waiting for machine to come up
	I1127 23:27:11.713060   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:11.713382   12542 main.go:141] libmachine: (addons-052905) DBG | unable to find current IP address of domain addons-052905 in network mk-addons-052905
	I1127 23:27:11.713416   12542 main.go:141] libmachine: (addons-052905) DBG | I1127 23:27:11.713324   12564 retry.go:31] will retry after 4.978008624s: waiting for machine to come up
	I1127 23:27:16.693152   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:16.693674   12542 main.go:141] libmachine: (addons-052905) Found IP for machine: 192.168.39.221
	I1127 23:27:16.693698   12542 main.go:141] libmachine: (addons-052905) Reserving static IP address...
	I1127 23:27:16.693727   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has current primary IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:16.694010   12542 main.go:141] libmachine: (addons-052905) DBG | unable to find host DHCP lease matching {name: "addons-052905", mac: "52:54:00:ec:9e:b0", ip: "192.168.39.221"} in network mk-addons-052905
	I1127 23:27:16.767011   12542 main.go:141] libmachine: (addons-052905) DBG | Getting to WaitForSSH function...
	I1127 23:27:16.767053   12542 main.go:141] libmachine: (addons-052905) Reserved static IP address: 192.168.39.221
	I1127 23:27:16.767076   12542 main.go:141] libmachine: (addons-052905) Waiting for SSH to be available...
	I1127 23:27:16.769459   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:16.769935   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:16.769972   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:16.770050   12542 main.go:141] libmachine: (addons-052905) DBG | Using SSH client type: external
	I1127 23:27:16.770071   12542 main.go:141] libmachine: (addons-052905) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa (-rw-------)
	I1127 23:27:16.770103   12542 main.go:141] libmachine: (addons-052905) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1127 23:27:16.770126   12542 main.go:141] libmachine: (addons-052905) DBG | About to run SSH command:
	I1127 23:27:16.770141   12542 main.go:141] libmachine: (addons-052905) DBG | exit 0
	I1127 23:27:16.872794   12542 main.go:141] libmachine: (addons-052905) DBG | SSH cmd err, output: <nil>: 
	I1127 23:27:16.873079   12542 main.go:141] libmachine: (addons-052905) KVM machine creation complete!
	I1127 23:27:16.873440   12542 main.go:141] libmachine: (addons-052905) Calling .GetConfigRaw
	I1127 23:27:16.873981   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:16.874156   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:16.874285   12542 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1127 23:27:16.874314   12542 main.go:141] libmachine: (addons-052905) Calling .GetState
	I1127 23:27:16.875479   12542 main.go:141] libmachine: Detecting operating system of created instance...
	I1127 23:27:16.875503   12542 main.go:141] libmachine: Waiting for SSH to be available...
	I1127 23:27:16.875513   12542 main.go:141] libmachine: Getting to WaitForSSH function...
	I1127 23:27:16.875524   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:16.877748   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:16.878121   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:16.878165   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:16.878309   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:16.878495   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:16.878631   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:16.878769   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:16.878952   12542 main.go:141] libmachine: Using SSH client type: native
	I1127 23:27:16.879311   12542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1127 23:27:16.879326   12542 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1127 23:27:17.008080   12542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 23:27:17.008120   12542 main.go:141] libmachine: Detecting the provisioner...
	I1127 23:27:17.008129   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:17.011020   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:17.011416   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:17.011449   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:17.011609   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:17.011799   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:17.011954   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:17.012073   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:17.012201   12542 main.go:141] libmachine: Using SSH client type: native
	I1127 23:27:17.012551   12542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1127 23:27:17.012567   12542 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1127 23:27:17.142014   12542 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g8be4f20-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1127 23:27:17.142100   12542 main.go:141] libmachine: found compatible host: buildroot
	I1127 23:27:17.142115   12542 main.go:141] libmachine: Provisioning with buildroot...
	I1127 23:27:17.142128   12542 main.go:141] libmachine: (addons-052905) Calling .GetMachineName
	I1127 23:27:17.142374   12542 buildroot.go:166] provisioning hostname "addons-052905"
	I1127 23:27:17.142416   12542 main.go:141] libmachine: (addons-052905) Calling .GetMachineName
	I1127 23:27:17.142616   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:17.145442   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:17.145788   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:17.145807   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:17.145976   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:17.146149   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:17.146307   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:17.146471   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:17.146641   12542 main.go:141] libmachine: Using SSH client type: native
	I1127 23:27:17.147133   12542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1127 23:27:17.147156   12542 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-052905 && echo "addons-052905" | sudo tee /etc/hostname
	I1127 23:27:17.290392   12542 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-052905
	
	I1127 23:27:17.290425   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:17.292791   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:17.293212   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:17.293240   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:17.293467   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:17.293679   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:17.293861   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:17.294052   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:17.294215   12542 main.go:141] libmachine: Using SSH client type: native
	I1127 23:27:17.294558   12542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1127 23:27:17.294582   12542 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-052905' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-052905/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-052905' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 23:27:17.433379   12542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 23:27:17.433418   12542 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1127 23:27:17.433442   12542 buildroot.go:174] setting up certificates
	I1127 23:27:17.433457   12542 provision.go:83] configureAuth start
	I1127 23:27:17.433475   12542 main.go:141] libmachine: (addons-052905) Calling .GetMachineName
	I1127 23:27:17.433800   12542 main.go:141] libmachine: (addons-052905) Calling .GetIP
	I1127 23:27:17.436330   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:17.436630   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:17.436656   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:17.436830   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:17.438933   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:17.439213   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:17.439233   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:17.439351   12542 provision.go:138] copyHostCerts
	I1127 23:27:17.439415   12542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1127 23:27:17.439535   12542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1127 23:27:17.439603   12542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1127 23:27:17.439678   12542 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.addons-052905 san=[192.168.39.221 192.168.39.221 localhost 127.0.0.1 minikube addons-052905]
	I1127 23:27:17.509780   12542 provision.go:172] copyRemoteCerts
	I1127 23:27:17.509846   12542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 23:27:17.509867   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:17.512661   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:17.512941   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:17.512973   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:17.513099   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:17.513306   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:17.513468   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:17.513609   12542 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa Username:docker}
	I1127 23:27:17.611138   12542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1127 23:27:17.635380   12542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1127 23:27:17.659662   12542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1127 23:27:17.682602   12542 provision.go:86] duration metric: configureAuth took 249.130684ms
	I1127 23:27:17.682631   12542 buildroot.go:189] setting minikube options for container-runtime
	I1127 23:27:17.682829   12542 config.go:182] Loaded profile config "addons-052905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:27:17.682912   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:17.685310   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:17.685641   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:17.685672   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:17.685871   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:17.686068   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:17.686230   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:17.686384   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:17.686584   12542 main.go:141] libmachine: Using SSH client type: native
	I1127 23:27:17.686932   12542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1127 23:27:17.686951   12542 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1127 23:27:18.018752   12542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1127 23:27:18.018782   12542 main.go:141] libmachine: Checking connection to Docker...
	I1127 23:27:18.018822   12542 main.go:141] libmachine: (addons-052905) Calling .GetURL
	I1127 23:27:18.020087   12542 main.go:141] libmachine: (addons-052905) DBG | Using libvirt version 6000000
	I1127 23:27:18.022326   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:18.022749   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:18.022785   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:18.022913   12542 main.go:141] libmachine: Docker is up and running!
	I1127 23:27:18.022928   12542 main.go:141] libmachine: Reticulating splines...
	I1127 23:27:18.022946   12542 client.go:171] LocalClient.Create took 26.51124474s
	I1127 23:27:18.022974   12542 start.go:167] duration metric: libmachine.API.Create for "addons-052905" took 26.511320736s
	I1127 23:27:18.022988   12542 start.go:300] post-start starting for "addons-052905" (driver="kvm2")
	I1127 23:27:18.023003   12542 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 23:27:18.023026   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:18.023270   12542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 23:27:18.023293   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:18.025597   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:18.026033   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:18.026074   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:18.026147   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:18.026349   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:18.026491   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:18.026680   12542 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa Username:docker}
	I1127 23:27:18.123153   12542 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 23:27:18.127296   12542 info.go:137] Remote host: Buildroot 2021.02.12
	I1127 23:27:18.127319   12542 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1127 23:27:18.127404   12542 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1127 23:27:18.127430   12542 start.go:303] post-start completed in 104.434442ms
	I1127 23:27:18.127459   12542 main.go:141] libmachine: (addons-052905) Calling .GetConfigRaw
	I1127 23:27:18.128008   12542 main.go:141] libmachine: (addons-052905) Calling .GetIP
	I1127 23:27:18.130769   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:18.131123   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:18.131148   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:18.131374   12542 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/config.json ...
	I1127 23:27:18.131532   12542 start.go:128] duration metric: createHost completed in 26.636530988s
	I1127 23:27:18.131553   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:18.133580   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:18.133904   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:18.133946   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:18.134063   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:18.134249   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:18.134405   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:18.134550   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:18.134727   12542 main.go:141] libmachine: Using SSH client type: native
	I1127 23:27:18.135033   12542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1127 23:27:18.135044   12542 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1127 23:27:18.265648   12542 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701127638.238593885
	
	I1127 23:27:18.265671   12542 fix.go:206] guest clock: 1701127638.238593885
	I1127 23:27:18.265680   12542 fix.go:219] Guest: 2023-11-27 23:27:18.238593885 +0000 UTC Remote: 2023-11-27 23:27:18.131544066 +0000 UTC m=+26.746535938 (delta=107.049819ms)
	I1127 23:27:18.265722   12542 fix.go:190] guest clock delta is within tolerance: 107.049819ms
	I1127 23:27:18.265730   12542 start.go:83] releasing machines lock for "addons-052905", held for 26.770848442s
	I1127 23:27:18.265762   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:18.266013   12542 main.go:141] libmachine: (addons-052905) Calling .GetIP
	I1127 23:27:18.269007   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:18.269294   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:18.269320   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:18.269462   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:18.269978   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:18.270169   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:18.270295   12542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 23:27:18.270347   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:18.270393   12542 ssh_runner.go:195] Run: cat /version.json
	I1127 23:27:18.270420   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:18.272814   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:18.272840   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:18.273219   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:18.273249   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:18.273286   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:18.273315   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:18.273374   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:18.273494   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:18.273617   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:18.273700   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:18.273793   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:18.273883   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:18.274028   12542 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa Username:docker}
	I1127 23:27:18.274027   12542 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa Username:docker}
	I1127 23:27:18.383979   12542 ssh_runner.go:195] Run: systemctl --version
	I1127 23:27:18.389759   12542 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1127 23:27:18.554486   12542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1127 23:27:18.560426   12542 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1127 23:27:18.560494   12542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:27:18.576641   12542 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1127 23:27:18.576666   12542 start.go:472] detecting cgroup driver to use...
	I1127 23:27:18.576746   12542 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 23:27:18.591675   12542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 23:27:18.604929   12542 docker.go:203] disabling cri-docker service (if available) ...
	I1127 23:27:18.605011   12542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 23:27:18.619195   12542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 23:27:18.633312   12542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1127 23:27:18.743151   12542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 23:27:18.864367   12542 docker.go:219] disabling docker service ...
	I1127 23:27:18.864439   12542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 23:27:18.877308   12542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 23:27:18.889001   12542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 23:27:18.998675   12542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 23:27:19.107645   12542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1127 23:27:19.119949   12542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 23:27:19.137685   12542 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1127 23:27:19.137760   12542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:27:19.147016   12542 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1127 23:27:19.147113   12542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:27:19.156411   12542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:27:19.165711   12542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:27:19.174769   12542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 23:27:19.184325   12542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 23:27:19.192231   12542 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1127 23:27:19.192309   12542 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1127 23:27:19.203876   12542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1127 23:27:19.213973   12542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 23:27:19.312499   12542 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1127 23:27:19.489042   12542 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1127 23:27:19.489145   12542 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1127 23:27:19.494360   12542 start.go:540] Will wait 60s for crictl version
	I1127 23:27:19.494450   12542 ssh_runner.go:195] Run: which crictl
	I1127 23:27:19.499122   12542 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1127 23:27:19.540499   12542 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1127 23:27:19.540638   12542 ssh_runner.go:195] Run: crio --version
	I1127 23:27:19.589867   12542 ssh_runner.go:195] Run: crio --version
	I1127 23:27:19.638731   12542 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1127 23:27:19.640094   12542 main.go:141] libmachine: (addons-052905) Calling .GetIP
	I1127 23:27:19.642781   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:19.643191   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:19.643222   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:19.643449   12542 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1127 23:27:19.647834   12542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:27:19.661632   12542 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:27:19.661696   12542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 23:27:19.695555   12542 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1127 23:27:19.695620   12542 ssh_runner.go:195] Run: which lz4
	I1127 23:27:19.699621   12542 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1127 23:27:19.703874   12542 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1127 23:27:19.703902   12542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1127 23:27:21.399946   12542 crio.go:444] Took 1.700363 seconds to copy over tarball
	I1127 23:27:21.400048   12542 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1127 23:27:24.353499   12542 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.953426173s)
	I1127 23:27:24.353525   12542 crio.go:451] Took 2.953546 seconds to extract the tarball
	I1127 23:27:24.353535   12542 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1127 23:27:24.395105   12542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 23:27:24.466578   12542 crio.go:496] all images are preloaded for cri-o runtime.
	I1127 23:27:24.466608   12542 cache_images.go:84] Images are preloaded, skipping loading
	I1127 23:27:24.466686   12542 ssh_runner.go:195] Run: crio config
	I1127 23:27:24.520816   12542 cni.go:84] Creating CNI manager for ""
	I1127 23:27:24.520839   12542 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1127 23:27:24.520858   12542 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1127 23:27:24.520874   12542 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.221 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-052905 NodeName:addons-052905 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1127 23:27:24.521023   12542 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.221
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-052905"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1127 23:27:24.521086   12542 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-052905 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-052905 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1127 23:27:24.521141   12542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1127 23:27:24.530884   12542 binaries.go:44] Found k8s binaries, skipping transfer
	I1127 23:27:24.530954   12542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1127 23:27:24.539917   12542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1127 23:27:24.556109   12542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1127 23:27:24.571716   12542 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1127 23:27:24.587873   12542 ssh_runner.go:195] Run: grep 192.168.39.221	control-plane.minikube.internal$ /etc/hosts
	I1127 23:27:24.591384   12542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:27:24.603019   12542 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905 for IP: 192.168.39.221
	I1127 23:27:24.603046   12542 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:27:24.603206   12542 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1127 23:27:24.691023   12542 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt ...
	I1127 23:27:24.691055   12542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt: {Name:mkd55a492d6b19b16796ee7ddacc9a9fa0503157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:27:24.691205   12542 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key ...
	I1127 23:27:24.691219   12542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key: {Name:mk7dc169ff97e7fdf8bda1783ea499e346747ca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:27:24.691291   12542 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1127 23:27:24.761723   12542 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt ...
	I1127 23:27:24.761755   12542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt: {Name:mk2f14623450393ab29e4712aea188b589f18e5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:27:24.761910   12542 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key ...
	I1127 23:27:24.761920   12542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key: {Name:mk1a0c56e6450f5553ebdba52876cdd10c65f096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:27:24.762024   12542 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.key
	I1127 23:27:24.762038   12542 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt with IP's: []
	I1127 23:27:24.910819   12542 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt ...
	I1127 23:27:24.910852   12542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: {Name:mke07c181958b60375fde9ce5236025b01053918 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:27:24.911002   12542 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.key ...
	I1127 23:27:24.911012   12542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.key: {Name:mkd189ffe6818ba801dd1c9bd018b200a53be551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:27:24.911078   12542 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/apiserver.key.52bad639
	I1127 23:27:24.911094   12542 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/apiserver.crt.52bad639 with IP's: [192.168.39.221 10.96.0.1 127.0.0.1 10.0.0.1]
	I1127 23:27:25.063585   12542 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/apiserver.crt.52bad639 ...
	I1127 23:27:25.063615   12542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/apiserver.crt.52bad639: {Name:mk2f4174bbd3c64daa3ef43097e67dda6096e49b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:27:25.063750   12542 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/apiserver.key.52bad639 ...
	I1127 23:27:25.063763   12542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/apiserver.key.52bad639: {Name:mkaeb621351122e71317db60c807afcf6599bb44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:27:25.063829   12542 certs.go:337] copying /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/apiserver.crt.52bad639 -> /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/apiserver.crt
	I1127 23:27:25.063906   12542 certs.go:341] copying /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/apiserver.key.52bad639 -> /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/apiserver.key
	I1127 23:27:25.063952   12542 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/proxy-client.key
	I1127 23:27:25.063963   12542 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/proxy-client.crt with IP's: []
	I1127 23:27:25.122395   12542 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/proxy-client.crt ...
	I1127 23:27:25.122424   12542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/proxy-client.crt: {Name:mk66d373280a78b33b584b94c1023d130dbd9042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:27:25.122573   12542 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/proxy-client.key ...
	I1127 23:27:25.122583   12542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/proxy-client.key: {Name:mkd316bd6f73e6eef85c557fb32152bf7e11b14c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:27:25.122743   12542 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1127 23:27:25.122776   12542 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1127 23:27:25.122807   12542 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1127 23:27:25.122848   12542 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1127 23:27:25.123441   12542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1127 23:27:25.150692   12542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1127 23:27:25.173693   12542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1127 23:27:25.197458   12542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1127 23:27:25.221568   12542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1127 23:27:25.245409   12542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1127 23:27:25.267727   12542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1127 23:27:25.290400   12542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1127 23:27:25.313115   12542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1127 23:27:25.335499   12542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1127 23:27:25.351260   12542 ssh_runner.go:195] Run: openssl version
	I1127 23:27:25.356614   12542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1127 23:27:25.365847   12542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:27:25.370073   12542 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:27:25.370125   12542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:27:25.375469   12542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1127 23:27:25.385179   12542 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1127 23:27:25.389483   12542 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 23:27:25.389524   12542 kubeadm.go:404] StartCluster: {Name:addons-052905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-052905 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:27:25.389611   12542 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1127 23:27:25.389665   12542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1127 23:27:25.429122   12542 cri.go:89] found id: ""
	I1127 23:27:25.429185   12542 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1127 23:27:25.438059   12542 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1127 23:27:25.446704   12542 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1127 23:27:25.455670   12542 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 23:27:25.455713   12542 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1127 23:27:25.510419   12542 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1127 23:27:25.510524   12542 kubeadm.go:322] [preflight] Running pre-flight checks
	I1127 23:27:25.652038   12542 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 23:27:25.652176   12542 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 23:27:25.652299   12542 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1127 23:27:25.891637   12542 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 23:27:26.087399   12542 out.go:204]   - Generating certificates and keys ...
	I1127 23:27:26.087521   12542 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1127 23:27:26.087723   12542 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1127 23:27:26.202032   12542 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1127 23:27:26.282703   12542 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1127 23:27:26.544265   12542 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1127 23:27:26.755182   12542 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1127 23:27:27.049414   12542 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1127 23:27:27.049573   12542 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-052905 localhost] and IPs [192.168.39.221 127.0.0.1 ::1]
	I1127 23:27:27.143584   12542 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1127 23:27:27.144132   12542 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-052905 localhost] and IPs [192.168.39.221 127.0.0.1 ::1]
	I1127 23:27:27.356968   12542 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1127 23:27:27.475293   12542 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1127 23:27:27.635434   12542 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1127 23:27:27.635529   12542 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 23:27:27.850337   12542 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 23:27:27.999526   12542 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 23:27:28.241802   12542 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 23:27:28.326108   12542 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 23:27:28.327018   12542 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 23:27:28.329469   12542 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 23:27:28.331530   12542 out.go:204]   - Booting up control plane ...
	I1127 23:27:28.331688   12542 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 23:27:28.331829   12542 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 23:27:28.331947   12542 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 23:27:28.349250   12542 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 23:27:28.350440   12542 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 23:27:28.350547   12542 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1127 23:27:28.463370   12542 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 23:27:36.460853   12542 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002228 seconds
	I1127 23:27:36.461003   12542 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 23:27:36.476262   12542 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 23:27:37.006162   12542 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1127 23:27:37.006410   12542 kubeadm.go:322] [mark-control-plane] Marking the node addons-052905 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1127 23:27:37.526211   12542 kubeadm.go:322] [bootstrap-token] Using token: 4qf2u5.yu9lpa11ritwseee
	I1127 23:27:37.527672   12542 out.go:204]   - Configuring RBAC rules ...
	I1127 23:27:37.527790   12542 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 23:27:37.534705   12542 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1127 23:27:37.542984   12542 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 23:27:37.546703   12542 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 23:27:37.550102   12542 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 23:27:37.556663   12542 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 23:27:37.573269   12542 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1127 23:27:37.832184   12542 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1127 23:27:37.955051   12542 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1127 23:27:37.956044   12542 kubeadm.go:322] 
	I1127 23:27:37.956131   12542 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1127 23:27:37.956143   12542 kubeadm.go:322] 
	I1127 23:27:37.956235   12542 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1127 23:27:37.956259   12542 kubeadm.go:322] 
	I1127 23:27:37.956296   12542 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1127 23:27:37.956360   12542 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 23:27:37.956407   12542 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 23:27:37.956416   12542 kubeadm.go:322] 
	I1127 23:27:37.956462   12542 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1127 23:27:37.956467   12542 kubeadm.go:322] 
	I1127 23:27:37.956536   12542 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1127 23:27:37.956553   12542 kubeadm.go:322] 
	I1127 23:27:37.956632   12542 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1127 23:27:37.956737   12542 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 23:27:37.956850   12542 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 23:27:37.956863   12542 kubeadm.go:322] 
	I1127 23:27:37.956963   12542 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1127 23:27:37.957061   12542 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1127 23:27:37.957075   12542 kubeadm.go:322] 
	I1127 23:27:37.957174   12542 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4qf2u5.yu9lpa11ritwseee \
	I1127 23:27:37.957296   12542 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1127 23:27:37.957327   12542 kubeadm.go:322] 	--control-plane 
	I1127 23:27:37.957343   12542 kubeadm.go:322] 
	I1127 23:27:37.957462   12542 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1127 23:27:37.957477   12542 kubeadm.go:322] 
	I1127 23:27:37.957597   12542 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4qf2u5.yu9lpa11ritwseee \
	I1127 23:27:37.957722   12542 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1127 23:27:37.958118   12542 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1127 23:27:37.958142   12542 cni.go:84] Creating CNI manager for ""
	I1127 23:27:37.958153   12542 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1127 23:27:37.960082   12542 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1127 23:27:37.961464   12542 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1127 23:27:37.994806   12542 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1127 23:27:38.056767   12542 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1127 23:27:38.056856   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:38.056886   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=addons-052905 minikube.k8s.io/updated_at=2023_11_27T23_27_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:38.108278   12542 ops.go:34] apiserver oom_adj: -16
	I1127 23:27:38.262182   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:38.367988   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:38.960625   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:39.460831   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:39.960871   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:40.460048   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:40.960453   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:41.460874   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:41.960165   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:42.460546   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:42.960621   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:43.460957   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:43.960657   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:44.460888   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:44.960842   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:45.460043   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:45.960594   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:46.460641   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:46.960832   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:47.460993   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:47.960662   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:48.460806   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:48.960035   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:49.460378   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:49.960344   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:50.460214   12542 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:27:50.600153   12542 kubeadm.go:1081] duration metric: took 12.543355216s to wait for elevateKubeSystemPrivileges.
	I1127 23:27:50.600183   12542 kubeadm.go:406] StartCluster complete in 25.210661077s
	I1127 23:27:50.600205   12542 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:27:50.600330   12542 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1127 23:27:50.600961   12542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:27:50.601177   12542 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1127 23:27:50.601270   12542 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1127 23:27:50.601401   12542 addons.go:69] Setting volumesnapshots=true in profile "addons-052905"
	I1127 23:27:50.601411   12542 config.go:182] Loaded profile config "addons-052905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:27:50.601416   12542 addons.go:69] Setting ingress-dns=true in profile "addons-052905"
	I1127 23:27:50.601431   12542 addons.go:69] Setting inspektor-gadget=true in profile "addons-052905"
	I1127 23:27:50.601441   12542 addons.go:69] Setting default-storageclass=true in profile "addons-052905"
	I1127 23:27:50.601450   12542 addons.go:69] Setting metrics-server=true in profile "addons-052905"
	I1127 23:27:50.601458   12542 addons.go:69] Setting cloud-spanner=true in profile "addons-052905"
	I1127 23:27:50.601468   12542 addons.go:69] Setting helm-tiller=true in profile "addons-052905"
	I1127 23:27:50.601470   12542 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-052905"
	I1127 23:27:50.601476   12542 addons.go:231] Setting addon cloud-spanner=true in "addons-052905"
	I1127 23:27:50.601482   12542 addons.go:231] Setting addon helm-tiller=true in "addons-052905"
	I1127 23:27:50.601445   12542 addons.go:231] Setting addon ingress-dns=true in "addons-052905"
	I1127 23:27:50.601492   12542 addons.go:69] Setting ingress=true in profile "addons-052905"
	I1127 23:27:50.601520   12542 addons.go:231] Setting addon ingress=true in "addons-052905"
	I1127 23:27:50.601545   12542 host.go:66] Checking if "addons-052905" exists ...
	I1127 23:27:50.601552   12542 host.go:66] Checking if "addons-052905" exists ...
	I1127 23:27:50.601482   12542 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-052905"
	I1127 23:27:50.601559   12542 host.go:66] Checking if "addons-052905" exists ...
	I1127 23:27:50.601559   12542 addons.go:69] Setting storage-provisioner=true in profile "addons-052905"
	I1127 23:27:50.601561   12542 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-052905"
	I1127 23:27:50.601443   12542 addons.go:231] Setting addon inspektor-gadget=true in "addons-052905"
	I1127 23:27:50.601598   12542 addons.go:69] Setting registry=true in profile "addons-052905"
	I1127 23:27:50.601461   12542 addons.go:231] Setting addon metrics-server=true in "addons-052905"
	I1127 23:27:50.601642   12542 addons.go:231] Setting addon registry=true in "addons-052905"
	I1127 23:27:50.602026   12542 host.go:66] Checking if "addons-052905" exists ...
	I1127 23:27:50.602057   12542 host.go:66] Checking if "addons-052905" exists ...
	I1127 23:27:50.602127   12542 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-052905"
	I1127 23:27:50.601554   12542 host.go:66] Checking if "addons-052905" exists ...
	I1127 23:27:50.602208   12542 host.go:66] Checking if "addons-052905" exists ...
	I1127 23:27:50.601457   12542 addons.go:69] Setting gcp-auth=true in profile "addons-052905"
	I1127 23:27:50.602339   12542 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-052905"
	I1127 23:27:50.602346   12542 mustload.go:65] Loading cluster: addons-052905
	I1127 23:27:50.602360   12542 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-052905"
	I1127 23:27:50.602614   12542 config.go:182] Loaded profile config "addons-052905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:27:50.602671   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.602685   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.602720   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.602724   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.602726   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.602758   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.601487   12542 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-052905"
	I1127 23:27:50.601589   12542 addons.go:231] Setting addon storage-provisioner=true in "addons-052905"
	I1127 23:27:50.602798   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.602827   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.602912   12542 host.go:66] Checking if "addons-052905" exists ...
	I1127 23:27:50.602972   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.601423   12542 addons.go:231] Setting addon volumesnapshots=true in "addons-052905"
	I1127 23:27:50.602998   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.603264   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.603322   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.603321   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.603344   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.603353   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.603395   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.603505   12542 host.go:66] Checking if "addons-052905" exists ...
	I1127 23:27:50.603530   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.603560   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.603611   12542 host.go:66] Checking if "addons-052905" exists ...
	I1127 23:27:50.603952   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.603982   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.604040   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.604061   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.604157   12542 host.go:66] Checking if "addons-052905" exists ...
	I1127 23:27:50.604167   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.604206   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.605555   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.605635   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.623807   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44187
	I1127 23:27:50.623969   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42091
	I1127 23:27:50.624333   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.624414   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36725
	I1127 23:27:50.624637   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.624729   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38297
	I1127 23:27:50.624862   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.625117   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.625140   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.625347   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.625467   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.625488   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.625586   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.625597   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.625652   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.626067   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.626094   12542 main.go:141] libmachine: (addons-052905) Calling .GetState
	I1127 23:27:50.626137   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.626265   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.626283   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.626745   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.626792   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.627127   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.628202   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.628242   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.629712   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.629743   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.630843   12542 addons.go:231] Setting addon default-storageclass=true in "addons-052905"
	I1127 23:27:50.630885   12542 host.go:66] Checking if "addons-052905" exists ...
	I1127 23:27:50.631650   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.631681   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.631909   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45611
	I1127 23:27:50.632285   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.632499   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.632593   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.632973   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.632997   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.645175   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.646060   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.646104   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.647629   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37579
	I1127 23:27:50.648186   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.648740   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.648790   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.649188   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.649405   12542 main.go:141] libmachine: (addons-052905) Calling .GetState
	I1127 23:27:50.650527   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37829
	I1127 23:27:50.651087   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.652929   12542 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-052905"
	I1127 23:27:50.652977   12542 host.go:66] Checking if "addons-052905" exists ...
	I1127 23:27:50.653511   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.653547   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.654234   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42653
	I1127 23:27:50.657599   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.657628   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.657690   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45275
	I1127 23:27:50.658386   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.658740   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.658829   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.659086   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.659122   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.659406   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.659421   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.659500   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36701
	I1127 23:27:50.659579   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43325
	I1127 23:27:50.659654   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.659677   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.659998   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.660044   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.660163   12542 main.go:141] libmachine: (addons-052905) Calling .GetState
	I1127 23:27:50.660500   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.660846   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.660888   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.661447   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.661470   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.661708   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:50.661753   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.664475   12542 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1127 23:27:50.662389   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.662421   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.663757   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33477
	I1127 23:27:50.666077   12542 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1127 23:27:50.666087   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.666090   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1127 23:27:50.666106   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:50.667098   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.667565   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.667580   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.667694   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.667708   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.668110   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.668648   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.668687   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.669190   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.669475   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.670402   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.670425   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.670599   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:50.670619   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.670785   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:50.670906   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:50.671018   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:50.671126   12542 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa Username:docker}
	I1127 23:27:50.682710   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40649
	I1127 23:27:50.683288   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.683926   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.683941   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.684288   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.684450   12542 main.go:141] libmachine: (addons-052905) Calling .GetState
	I1127 23:27:50.686057   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:50.687897   12542 out.go:177]   - Using image docker.io/registry:2.8.3
	I1127 23:27:50.686844   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33679
	I1127 23:27:50.691197   12542 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1127 23:27:50.690255   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.690454   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39591
	I1127 23:27:50.692017   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44077
	I1127 23:27:50.693242   12542 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1127 23:27:50.693255   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1127 23:27:50.693272   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:50.694069   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.694252   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.694266   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.694679   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.694814   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.694829   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.695547   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.695586   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.695776   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.696363   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.696400   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.697039   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.697186   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33091
	I1127 23:27:50.697758   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.697774   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.697821   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.698194   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.698341   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.698374   12542 main.go:141] libmachine: (addons-052905) Calling .GetState
	I1127 23:27:50.698379   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.698913   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.698970   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43795
	I1127 23:27:50.699781   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.700270   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.700291   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.700370   12542 main.go:141] libmachine: (addons-052905) Calling .GetState
	I1127 23:27:50.700646   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.700814   12542 main.go:141] libmachine: (addons-052905) Calling .GetState
	I1127 23:27:50.700949   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.701237   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:50.701255   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.701451   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:50.701643   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:50.701787   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:50.701961   12542 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa Username:docker}
	I1127 23:27:50.702503   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:50.704530   12542 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1127 23:27:50.703070   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46007
	I1127 23:27:50.703106   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:50.704503   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:50.706131   12542 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1127 23:27:50.706141   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1127 23:27:50.706154   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:50.708069   12542 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1127 23:27:50.707806   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.709336   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.709389   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46477
	I1127 23:27:50.709487   12542 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1127 23:27:50.709501   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1127 23:27:50.709519   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:50.709559   12542 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1127 23:27:50.710901   12542 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1127 23:27:50.710916   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1127 23:27:50.710935   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:50.709701   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:50.711009   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.709748   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33205
	I1127 23:27:50.710037   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:50.711329   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:50.711524   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:50.711539   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35197
	I1127 23:27:50.711733   12542 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa Username:docker}
	I1127 23:27:50.712594   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.712729   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.712743   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.712912   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.713204   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.713223   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.713246   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39133
	I1127 23:27:50.713572   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.713657   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.713706   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.713869   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.713882   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.714037   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.714393   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.714437   12542 main.go:141] libmachine: (addons-052905) Calling .GetState
	I1127 23:27:50.714456   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.714468   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.714572   12542 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-052905" context rescaled to 1 replicas
	I1127 23:27:50.714601   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.714615   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.714609   12542 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 23:27:50.716444   12542 out.go:177] * Verifying Kubernetes components...
	I1127 23:27:50.714795   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.714884   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:50.715192   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:50.715202   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:50.715279   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.715352   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:50.715478   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.715696   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.717132   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:50.717849   12542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:27:50.717940   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.717957   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.718106   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41465
	I1127 23:27:50.718410   12542 main.go:141] libmachine: (addons-052905) Calling .GetState
	I1127 23:27:50.718495   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.719814   12542 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.22.0
	I1127 23:27:50.718516   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.718541   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:50.718579   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:50.718602   12542 main.go:141] libmachine: (addons-052905) Calling .GetState
	I1127 23:27:50.718806   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.718835   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.719181   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38589
	I1127 23:27:50.720444   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:50.720689   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I1127 23:27:50.721170   12542 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1127 23:27:50.721189   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1127 23:27:50.721203   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:50.722867   12542 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:27:50.721582   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.721719   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:50.721931   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:50.722004   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.722338   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.722370   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.723614   12542 host.go:66] Checking if "addons-052905" exists ...
	I1127 23:27:50.724390   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:50.724416   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:50.724606   12542 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:27:50.724620   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1127 23:27:50.724691   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.724698   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.724854   12542 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa Username:docker}
	I1127 23:27:50.724913   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:50.724934   12542 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa Username:docker}
	I1127 23:27:50.725176   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.725186   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.725694   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.725851   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.725954   12542 main.go:141] libmachine: (addons-052905) Calling .GetState
	I1127 23:27:50.725993   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.726005   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.726068   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:50.726093   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.726205   12542 main.go:141] libmachine: (addons-052905) Calling .GetState
	I1127 23:27:50.726258   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:50.726435   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:50.726682   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.726761   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:50.726850   12542 main.go:141] libmachine: (addons-052905) Calling .GetState
	I1127 23:27:50.727154   12542 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa Username:docker}
	I1127 23:27:50.727847   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:50.728076   12542 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1127 23:27:50.728092   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1127 23:27:50.728107   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:50.728276   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45501
	I1127 23:27:50.728410   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:50.731116   12542 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1127 23:27:50.728821   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.728863   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:50.730713   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.731425   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:50.732045   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.732938   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:50.732967   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.733055   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:50.733079   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.732721   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:50.732887   12542 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1127 23:27:50.733150   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1127 23:27:50.733168   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:50.733664   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:50.733730   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:50.733908   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:50.733967   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:50.734201   12542 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa Username:docker}
	I1127 23:27:50.734223   12542 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa Username:docker}
	I1127 23:27:50.735607   12542 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1127 23:27:50.734414   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.736609   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.737269   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:50.737299   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.739309   12542 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1127 23:27:50.737427   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.737160   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:50.742491   12542 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1127 23:27:50.741231   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:50.741487   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.744121   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:50.744239   12542 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1127 23:27:50.744261   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1127 23:27:50.744277   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:50.744272   12542 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa Username:docker}
	I1127 23:27:50.744458   12542 main.go:141] libmachine: (addons-052905) Calling .GetState
	I1127 23:27:50.745038   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35759
	I1127 23:27:50.745543   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.746964   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.746990   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:50.746992   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.748593   12542 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1127 23:27:50.747468   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.748794   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.749315   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:50.749564   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42183
	I1127 23:27:50.751421   12542 out.go:177]   - Using image docker.io/busybox:stable
	I1127 23:27:50.749606   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39791
	I1127 23:27:50.750068   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:50.750201   12542 main.go:141] libmachine: (addons-052905) Calling .GetState
	I1127 23:27:50.750224   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:50.750460   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.752808   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.752911   12542 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1127 23:27:50.752928   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1127 23:27:50.752945   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:50.752955   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:50.753318   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.753334   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.753387   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:50.753395   12542 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa Username:docker}
	I1127 23:27:50.753793   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:50.753811   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:50.754169   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.754343   12542 main.go:141] libmachine: (addons-052905) Calling .GetState
	I1127 23:27:50.754396   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:50.754468   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:50.756239   12542 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1127 23:27:50.754940   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:50.755934   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:50.755960   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.756430   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:50.759045   12542 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1127 23:27:50.757615   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:50.757755   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:50.761911   12542 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1127 23:27:50.763177   12542 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1127 23:27:50.760561   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.760770   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:50.766103   12542 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1127 23:27:50.766117   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1127 23:27:50.766132   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:50.767564   12542 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1127 23:27:50.764800   12542 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa Username:docker}
	I1127 23:27:50.770321   12542 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1127 23:27:50.769105   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.769723   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:50.771604   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:50.772886   12542 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1127 23:27:50.771621   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.771785   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:50.774501   12542 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1127 23:27:50.773122   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:50.774702   12542 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa Username:docker}
	I1127 23:27:50.776065   12542 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1127 23:27:50.777375   12542 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1127 23:27:50.777395   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1127 23:27:50.777413   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:50.781322   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.781854   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:50.781885   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:50.782003   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:50.782191   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:50.782382   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:50.782522   12542 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa Username:docker}
	W1127 23:27:50.783592   12542 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35968->192.168.39.221:22: read: connection reset by peer
	I1127 23:27:50.783613   12542 retry.go:31] will retry after 252.729176ms: ssh: handshake failed: read tcp 192.168.39.1:35968->192.168.39.221:22: read: connection reset by peer
	I1127 23:27:50.961721   12542 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1127 23:27:50.977985   12542 node_ready.go:35] waiting up to 6m0s for node "addons-052905" to be "Ready" ...
	I1127 23:27:50.998718   12542 node_ready.go:49] node "addons-052905" has status "Ready":"True"
	I1127 23:27:50.998738   12542 node_ready.go:38] duration metric: took 20.705903ms waiting for node "addons-052905" to be "Ready" ...
	I1127 23:27:50.998746   12542 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 23:27:51.031534   12542 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5kgzx" in "kube-system" namespace to be "Ready" ...
	I1127 23:27:51.064899   12542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:27:51.131508   12542 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1127 23:27:51.131530   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1127 23:27:51.177363   12542 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1127 23:27:51.177394   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1127 23:27:51.191796   12542 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1127 23:27:51.191837   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1127 23:27:51.250626   12542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1127 23:27:51.265999   12542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1127 23:27:51.287575   12542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1127 23:27:51.325260   12542 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1127 23:27:51.325286   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1127 23:27:51.342687   12542 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1127 23:27:51.342712   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1127 23:27:51.349567   12542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1127 23:27:51.360645   12542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1127 23:27:51.377553   12542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1127 23:27:51.378747   12542 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1127 23:27:51.378765   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1127 23:27:51.388832   12542 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1127 23:27:51.388860   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1127 23:27:51.439372   12542 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1127 23:27:51.439396   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1127 23:27:51.479195   12542 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1127 23:27:51.479223   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1127 23:27:51.481836   12542 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1127 23:27:51.481859   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1127 23:27:51.533214   12542 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1127 23:27:51.533243   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1127 23:27:51.753672   12542 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1127 23:27:51.753700   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1127 23:27:51.805951   12542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1127 23:27:51.821669   12542 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1127 23:27:51.821694   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1127 23:27:51.828892   12542 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1127 23:27:51.828927   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1127 23:27:51.840923   12542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1127 23:27:51.873153   12542 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1127 23:27:51.873178   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1127 23:27:51.940526   12542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1127 23:27:51.951954   12542 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1127 23:27:51.951982   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1127 23:27:51.984724   12542 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1127 23:27:51.984770   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1127 23:27:52.000529   12542 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1127 23:27:52.000558   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1127 23:27:52.007627   12542 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1127 23:27:52.007646   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1127 23:27:52.051599   12542 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1127 23:27:52.051622   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1127 23:27:52.067413   12542 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1127 23:27:52.067432   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1127 23:27:52.081503   12542 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1127 23:27:52.081522   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1127 23:27:52.141815   12542 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1127 23:27:52.141835   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1127 23:27:52.152899   12542 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1127 23:27:52.152922   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1127 23:27:52.169759   12542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1127 23:27:52.196484   12542 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1127 23:27:52.196508   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1127 23:27:52.215776   12542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1127 23:27:52.260595   12542 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1127 23:27:52.260627   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1127 23:27:52.322561   12542 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1127 23:27:52.322589   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1127 23:27:52.374919   12542 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1127 23:27:52.374940   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1127 23:27:52.422547   12542 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1127 23:27:52.422581   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1127 23:27:52.463691   12542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1127 23:27:54.044019   12542 pod_ready.go:102] pod "coredns-5dd5756b68-5kgzx" in "kube-system" namespace has status "Ready":"False"
	I1127 23:27:55.009088   12542 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.04733179s)
	I1127 23:27:55.009124   12542 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1127 23:27:56.088963   12542 pod_ready.go:102] pod "coredns-5dd5756b68-5kgzx" in "kube-system" namespace has status "Ready":"False"
	I1127 23:27:58.057033   12542 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1127 23:27:58.057071   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:58.060772   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:58.061274   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:58.061304   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:58.061493   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:58.061684   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:58.061851   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:58.062000   12542 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa Username:docker}
	I1127 23:27:58.119092   12542 pod_ready.go:102] pod "coredns-5dd5756b68-5kgzx" in "kube-system" namespace has status "Ready":"False"
	I1127 23:27:58.354959   12542 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1127 23:27:58.433316   12542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.368372198s)
	I1127 23:27:58.433370   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:58.433382   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:58.433688   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:27:58.433715   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:58.433732   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:58.433753   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:58.433766   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:58.434082   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:58.434099   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:58.501739   12542 addons.go:231] Setting addon gcp-auth=true in "addons-052905"
	I1127 23:27:58.501800   12542 host.go:66] Checking if "addons-052905" exists ...
	I1127 23:27:58.502223   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:58.502266   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:58.517123   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43461
	I1127 23:27:58.517554   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:58.518073   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:58.518101   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:58.518434   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:58.519021   12542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:27:58.519058   12542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:27:58.555666   12542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44021
	I1127 23:27:58.556134   12542 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:27:58.556657   12542 main.go:141] libmachine: Using API Version  1
	I1127 23:27:58.556682   12542 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:27:58.557034   12542 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:27:58.557239   12542 main.go:141] libmachine: (addons-052905) Calling .GetState
	I1127 23:27:58.558894   12542 main.go:141] libmachine: (addons-052905) Calling .DriverName
	I1127 23:27:58.559113   12542 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1127 23:27:58.559133   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHHostname
	I1127 23:27:58.561970   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:58.562433   12542 main.go:141] libmachine: (addons-052905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:9e:b0", ip: ""} in network mk-addons-052905: {Iface:virbr1 ExpiryTime:2023-11-28 00:27:07 +0000 UTC Type:0 Mac:52:54:00:ec:9e:b0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-052905 Clientid:01:52:54:00:ec:9e:b0}
	I1127 23:27:58.562463   12542 main.go:141] libmachine: (addons-052905) DBG | domain addons-052905 has defined IP address 192.168.39.221 and MAC address 52:54:00:ec:9e:b0 in network mk-addons-052905
	I1127 23:27:58.562572   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHPort
	I1127 23:27:58.562736   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHKeyPath
	I1127 23:27:58.562905   12542 main.go:141] libmachine: (addons-052905) Calling .GetSSHUsername
	I1127 23:27:58.563036   12542 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/addons-052905/id_rsa Username:docker}
	I1127 23:27:58.624006   12542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.373341555s)
	I1127 23:27:58.624035   12542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.358001676s)
	I1127 23:27:58.624054   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:58.624065   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:58.624068   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:58.624079   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:58.624299   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:58.624315   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:58.624328   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:58.624339   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:58.624471   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:58.624516   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:58.624540   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:58.624555   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:58.624482   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:27:58.624651   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:27:58.624681   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:58.624696   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:58.624728   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:58.624743   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:58.842537   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:58.842560   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:58.842849   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:27:58.842854   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:58.842877   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	W1127 23:27:58.843023   12542 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1127 23:27:58.905933   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:58.905954   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:58.906215   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:58.906242   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:59.908717   12542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.621104002s)
	I1127 23:27:59.908773   12542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.55916487s)
	I1127 23:27:59.908792   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:59.908807   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:59.908806   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:59.908817   12542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.548145663s)
	I1127 23:27:59.908844   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:59.908846   12542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.531267547s)
	I1127 23:27:59.908819   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:59.908864   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:59.908912   12542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.1029326s)
	I1127 23:27:59.908925   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:59.908929   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:59.908938   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:59.908864   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:59.909026   12542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.068069053s)
	I1127 23:27:59.909050   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:59.909067   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:59.909165   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:59.909179   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:59.909190   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:59.909190   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:59.909203   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:59.909200   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:59.909214   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:59.909226   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:59.909420   12542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.968859764s)
	I1127 23:27:59.909427   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:27:59.909441   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:59.909453   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:59.909457   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:59.909474   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:59.909483   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:59.909491   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:59.909579   12542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.739788862s)
	W1127 23:27:59.909609   12542 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1127 23:27:59.909625   12542 retry.go:31] will retry after 332.499412ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1127 23:27:59.909688   12542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.69388278s)
	I1127 23:27:59.909702   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:59.909711   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:59.909824   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:27:59.909859   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:27:59.909884   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:27:59.909912   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:59.909921   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:59.909931   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:59.909939   12542 addons.go:467] Verifying addon registry=true in "addons-052905"
	I1127 23:27:59.913396   12542 out.go:177] * Verifying registry addon...
	I1127 23:27:59.910089   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:59.913430   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:59.913444   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:59.913454   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:59.910106   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:27:59.910123   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:59.913507   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:59.910138   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:27:59.910153   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:59.913535   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:59.913549   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:59.909922   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:59.913587   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:59.910249   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:59.913602   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:59.913610   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:59.912148   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:27:59.913620   12542 addons.go:467] Verifying addon ingress=true in "addons-052905"
	I1127 23:27:59.912174   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:59.913638   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:59.913652   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:59.913661   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:59.912193   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:27:59.912220   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:59.913749   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:27:59.913762   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:59.915425   12542 out.go:177] * Verifying ingress addon...
	I1127 23:27:59.913520   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:27:59.913558   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:59.913724   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:59.913843   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:27:59.913921   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:59.913973   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:59.913992   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:27:59.916685   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:27:59.916736   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:59.916748   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:59.916775   12542 addons.go:467] Verifying addon metrics-server=true in "addons-052905"
	I1127 23:27:59.916867   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:59.916935   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:27:59.917035   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:59.917051   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:59.917076   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:27:59.917446   12542 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1127 23:27:59.917542   12542 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1127 23:27:59.918961   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:27:59.918974   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:27:59.929356   12542 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1127 23:27:59.929373   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:59.931512   12542 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1127 23:27:59.931525   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:27:59.951945   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:27:59.953329   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:00.242688   12542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1127 23:28:00.393239   12542 pod_ready.go:102] pod "coredns-5dd5756b68-5kgzx" in "kube-system" namespace has status "Ready":"False"
	I1127 23:28:00.494280   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:00.494429   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:00.640782   12542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.176999236s)
	I1127 23:28:00.640803   12542 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.081670232s)
	I1127 23:28:00.640844   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:28:00.640859   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:28:00.642829   12542 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1127 23:28:00.641135   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:28:00.641166   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:28:00.645768   12542 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1127 23:28:00.644358   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:28:00.647136   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:28:00.647145   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:28:00.647210   12542 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1127 23:28:00.647234   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1127 23:28:00.647441   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:28:00.647457   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:28:00.647466   12542 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-052905"
	I1127 23:28:00.647478   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:28:00.648792   12542 out.go:177] * Verifying csi-hostpath-driver addon...
	I1127 23:28:00.650729   12542 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1127 23:28:00.692872   12542 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1127 23:28:00.692894   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:00.742546   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:00.773607   12542 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1127 23:28:00.773631   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1127 23:28:00.816964   12542 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1127 23:28:00.816994   12542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1127 23:28:00.950913   12542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1127 23:28:01.052350   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:01.054320   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:01.313226   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:01.481806   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:01.482229   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:01.757953   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:02.043102   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:02.043247   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:02.272417   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:02.482585   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:02.487923   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:02.753873   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:02.765842   12542 pod_ready.go:102] pod "coredns-5dd5756b68-5kgzx" in "kube-system" namespace has status "Ready":"False"
	I1127 23:28:02.964011   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:02.964130   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:03.039988   12542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.797256217s)
	I1127 23:28:03.040049   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:28:03.040059   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:28:03.040351   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:28:03.040397   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:28:03.040410   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:28:03.040419   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:28:03.040438   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:28:03.040658   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:28:03.040698   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:28:03.265792   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:03.504176   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:03.504187   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:03.511464   12542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.560501149s)
	I1127 23:28:03.511531   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:28:03.511549   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:28:03.511819   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:28:03.511880   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:28:03.511925   12542 main.go:141] libmachine: Making call to close driver server
	I1127 23:28:03.511937   12542 main.go:141] libmachine: (addons-052905) Calling .Close
	I1127 23:28:03.512163   12542 main.go:141] libmachine: (addons-052905) DBG | Closing plugin on server side
	I1127 23:28:03.512187   12542 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:28:03.512221   12542 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:28:03.513940   12542 addons.go:467] Verifying addon gcp-auth=true in "addons-052905"
	I1127 23:28:03.515843   12542 out.go:177] * Verifying gcp-auth addon...
	I1127 23:28:03.518358   12542 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1127 23:28:03.576110   12542 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1127 23:28:03.576132   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:03.590148   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:03.750356   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:03.961052   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:03.971030   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:04.099371   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:04.249114   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:04.465971   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:04.479472   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:04.594687   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:04.749539   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:04.961028   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:04.961196   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:05.096124   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:05.248298   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:05.252093   12542 pod_ready.go:102] pod "coredns-5dd5756b68-5kgzx" in "kube-system" namespace has status "Ready":"False"
	I1127 23:28:05.456880   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:05.459825   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:05.595029   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:05.753151   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:05.959172   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:05.960398   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:06.095431   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:06.251564   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:06.458511   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:06.459643   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:06.594725   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:06.754243   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:06.959436   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:06.960649   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:07.094190   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:07.263026   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:07.264815   12542 pod_ready.go:102] pod "coredns-5dd5756b68-5kgzx" in "kube-system" namespace has status "Ready":"False"
	I1127 23:28:07.459123   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:07.459440   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:07.605714   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:07.751069   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:07.968520   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:07.975797   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:08.115041   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:08.249432   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:08.461484   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:08.462240   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:08.596937   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:08.749524   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:08.957058   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:08.962402   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:09.096664   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:09.248252   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:09.465746   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:09.466590   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:09.596195   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:09.755051   12542 pod_ready.go:102] pod "coredns-5dd5756b68-5kgzx" in "kube-system" namespace has status "Ready":"False"
	I1127 23:28:09.757487   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:09.959686   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:09.960213   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:10.095373   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:10.249118   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:10.489501   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:10.495140   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:10.594213   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:10.751076   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:10.960490   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:10.962100   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:11.094987   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:11.258195   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:11.456359   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:11.462957   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:11.607103   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:11.766888   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:11.779745   12542 pod_ready.go:102] pod "coredns-5dd5756b68-5kgzx" in "kube-system" namespace has status "Ready":"False"
	I1127 23:28:11.965824   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:11.974706   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:12.094361   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:12.250099   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:12.460420   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:12.462548   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:12.594774   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:12.751289   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:12.960767   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:12.961421   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:13.099043   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:13.257684   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:13.457616   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:13.459816   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:13.594219   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:13.749133   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:13.957319   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:13.958092   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:14.094799   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:14.250451   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:14.251386   12542 pod_ready.go:102] pod "coredns-5dd5756b68-5kgzx" in "kube-system" namespace has status "Ready":"False"
	I1127 23:28:14.460950   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:14.462596   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:14.594337   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:14.749846   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:14.958337   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:14.959380   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:15.094757   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:15.250448   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:15.460985   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:15.462411   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:15.599872   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:15.759387   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:15.958489   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:15.960042   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:16.094706   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:16.255322   12542 pod_ready.go:102] pod "coredns-5dd5756b68-5kgzx" in "kube-system" namespace has status "Ready":"False"
	I1127 23:28:16.255904   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:16.465291   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:16.476560   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:16.645483   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:16.755589   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:16.963646   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:16.967258   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:17.095478   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:17.250540   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:17.468611   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:17.470036   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:17.594575   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:17.751399   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:17.963241   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:17.963275   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:18.097226   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:18.248006   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:18.457233   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:18.457838   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:18.595024   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:18.749587   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:18.758811   12542 pod_ready.go:102] pod "coredns-5dd5756b68-5kgzx" in "kube-system" namespace has status "Ready":"False"
	I1127 23:28:18.958108   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:18.960122   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:19.094019   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:19.248259   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:19.458233   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:19.458379   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:19.595038   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:19.752115   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:19.957391   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:19.960144   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:20.094250   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:20.253370   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:20.458225   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:20.458548   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:20.594076   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:20.748881   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:20.957860   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:20.957928   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:21.094131   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:21.249057   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:21.250482   12542 pod_ready.go:102] pod "coredns-5dd5756b68-5kgzx" in "kube-system" namespace has status "Ready":"False"
	I1127 23:28:21.458144   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:21.458289   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:21.596944   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:21.749812   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:21.958734   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:21.959468   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:22.093835   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:22.249252   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:22.457024   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:22.459293   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:22.595002   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:22.749019   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:22.957774   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:22.958301   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:23.095367   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:23.249148   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:23.460677   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:23.461967   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:23.594539   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:23.749762   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:23.751449   12542 pod_ready.go:102] pod "coredns-5dd5756b68-5kgzx" in "kube-system" namespace has status "Ready":"False"
	I1127 23:28:23.958751   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:23.958938   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:24.094360   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:24.266342   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:24.456718   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:24.458149   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:24.594488   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:24.749192   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:24.958532   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:24.958961   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:25.094542   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:25.256404   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:25.459174   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:25.459399   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:25.594960   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:25.756510   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:25.758216   12542 pod_ready.go:102] pod "coredns-5dd5756b68-5kgzx" in "kube-system" namespace has status "Ready":"False"
	I1127 23:28:25.959842   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:25.959967   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:26.095749   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:26.248915   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:26.457962   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:26.458752   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:26.595793   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:26.749044   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:26.957715   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:26.960719   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:27.095273   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:27.255964   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:27.458201   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:27.459543   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:27.595129   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:27.750159   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:27.962937   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:27.964279   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:28.095236   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:28.249566   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:28.250592   12542 pod_ready.go:102] pod "coredns-5dd5756b68-5kgzx" in "kube-system" namespace has status "Ready":"False"
	I1127 23:28:28.459220   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:28.460497   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:28.595025   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:28.755803   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:28.984784   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:28.994307   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:29.094542   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:29.262065   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:29.461900   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:29.462699   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:29.594013   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:29.758466   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:29.959908   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:29.963545   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:30.095518   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:30.249787   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:30.253341   12542 pod_ready.go:102] pod "coredns-5dd5756b68-5kgzx" in "kube-system" namespace has status "Ready":"False"
	I1127 23:28:30.459218   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:30.459388   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:30.594885   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:30.752655   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:30.964165   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:30.966353   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:31.094858   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:31.249493   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:31.251401   12542 pod_ready.go:92] pod "coredns-5dd5756b68-5kgzx" in "kube-system" namespace has status "Ready":"True"
	I1127 23:28:31.251422   12542 pod_ready.go:81] duration metric: took 40.219854463s waiting for pod "coredns-5dd5756b68-5kgzx" in "kube-system" namespace to be "Ready" ...
	I1127 23:28:31.251436   12542 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-b8ct7" in "kube-system" namespace to be "Ready" ...
	I1127 23:28:31.253248   12542 pod_ready.go:97] error getting pod "coredns-5dd5756b68-b8ct7" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-b8ct7" not found
	I1127 23:28:31.253269   12542 pod_ready.go:81] duration metric: took 1.826515ms waiting for pod "coredns-5dd5756b68-b8ct7" in "kube-system" namespace to be "Ready" ...
	E1127 23:28:31.253278   12542 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-b8ct7" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-b8ct7" not found
	I1127 23:28:31.253284   12542 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-052905" in "kube-system" namespace to be "Ready" ...
	I1127 23:28:31.257648   12542 pod_ready.go:92] pod "etcd-addons-052905" in "kube-system" namespace has status "Ready":"True"
	I1127 23:28:31.257663   12542 pod_ready.go:81] duration metric: took 4.372803ms waiting for pod "etcd-addons-052905" in "kube-system" namespace to be "Ready" ...
	I1127 23:28:31.257671   12542 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-052905" in "kube-system" namespace to be "Ready" ...
	I1127 23:28:31.262307   12542 pod_ready.go:92] pod "kube-apiserver-addons-052905" in "kube-system" namespace has status "Ready":"True"
	I1127 23:28:31.262322   12542 pod_ready.go:81] duration metric: took 4.646467ms waiting for pod "kube-apiserver-addons-052905" in "kube-system" namespace to be "Ready" ...
	I1127 23:28:31.262335   12542 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-052905" in "kube-system" namespace to be "Ready" ...
	I1127 23:28:31.267288   12542 pod_ready.go:92] pod "kube-controller-manager-addons-052905" in "kube-system" namespace has status "Ready":"True"
	I1127 23:28:31.267303   12542 pod_ready.go:81] duration metric: took 4.962831ms waiting for pod "kube-controller-manager-addons-052905" in "kube-system" namespace to be "Ready" ...
	I1127 23:28:31.267311   12542 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4xph4" in "kube-system" namespace to be "Ready" ...
	I1127 23:28:31.447470   12542 pod_ready.go:92] pod "kube-proxy-4xph4" in "kube-system" namespace has status "Ready":"True"
	I1127 23:28:31.447495   12542 pod_ready.go:81] duration metric: took 180.178155ms waiting for pod "kube-proxy-4xph4" in "kube-system" namespace to be "Ready" ...
	I1127 23:28:31.447504   12542 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-052905" in "kube-system" namespace to be "Ready" ...
	I1127 23:28:31.458394   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:31.458527   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:31.594194   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:31.749132   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:31.848321   12542 pod_ready.go:92] pod "kube-scheduler-addons-052905" in "kube-system" namespace has status "Ready":"True"
	I1127 23:28:31.848343   12542 pod_ready.go:81] duration metric: took 400.832728ms waiting for pod "kube-scheduler-addons-052905" in "kube-system" namespace to be "Ready" ...
	I1127 23:28:31.848351   12542 pod_ready.go:38] duration metric: took 40.849595834s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 23:28:31.848368   12542 api_server.go:52] waiting for apiserver process to appear ...
	I1127 23:28:31.848429   12542 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 23:28:31.881277   12542 api_server.go:72] duration metric: took 41.166631025s to wait for apiserver process to appear ...
	I1127 23:28:31.881309   12542 api_server.go:88] waiting for apiserver healthz status ...
	I1127 23:28:31.881327   12542 api_server.go:253] Checking apiserver healthz at https://192.168.39.221:8443/healthz ...
	I1127 23:28:31.888167   12542 api_server.go:279] https://192.168.39.221:8443/healthz returned 200:
	ok
	I1127 23:28:31.889462   12542 api_server.go:141] control plane version: v1.28.4
	I1127 23:28:31.889489   12542 api_server.go:131] duration metric: took 8.172383ms to wait for apiserver health ...
	I1127 23:28:31.889500   12542 system_pods.go:43] waiting for kube-system pods to appear ...
	I1127 23:28:31.957361   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:31.959712   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:32.055631   12542 system_pods.go:59] 18 kube-system pods found
	I1127 23:28:32.055659   12542 system_pods.go:61] "coredns-5dd5756b68-5kgzx" [6d98b2f6-473f-40f9-ba4f-e5f7c166de81] Running
	I1127 23:28:32.055664   12542 system_pods.go:61] "csi-hostpath-attacher-0" [b0c714b2-674d-450b-b4ce-10a9a9454854] Running
	I1127 23:28:32.055668   12542 system_pods.go:61] "csi-hostpath-resizer-0" [8a413137-cdd4-4995-99ab-87552792609c] Running
	I1127 23:28:32.055675   12542 system_pods.go:61] "csi-hostpathplugin-cdrbk" [4a39ed46-aef4-4953-82a6-04509e42e9d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1127 23:28:32.055682   12542 system_pods.go:61] "etcd-addons-052905" [ec11f619-c13f-495b-b9f2-731e7fb5c351] Running
	I1127 23:28:32.055687   12542 system_pods.go:61] "kube-apiserver-addons-052905" [039c8064-aee8-4b84-8789-3d11e872a159] Running
	I1127 23:28:32.055691   12542 system_pods.go:61] "kube-controller-manager-addons-052905" [8807511c-4fb6-485c-a192-a2493dd1bebd] Running
	I1127 23:28:32.055696   12542 system_pods.go:61] "kube-ingress-dns-minikube" [cc23eea2-e3e2-4597-95d0-bd6b89799a99] Running
	I1127 23:28:32.055700   12542 system_pods.go:61] "kube-proxy-4xph4" [df916a46-bee3-44ec-bfec-41ce80b5126f] Running
	I1127 23:28:32.055704   12542 system_pods.go:61] "kube-scheduler-addons-052905" [3791ae5b-92dd-4b26-b64f-c32c4af19abf] Running
	I1127 23:28:32.055710   12542 system_pods.go:61] "metrics-server-7c66d45ddc-pkfgc" [ab570363-b34e-40b9-babf-b27b0101e455] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 23:28:32.055720   12542 system_pods.go:61] "nvidia-device-plugin-daemonset-d844x" [810a535e-867e-4bfa-bc47-26b4aee7c94b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1127 23:28:32.055726   12542 system_pods.go:61] "registry-fw9r2" [e8da5a5e-d8d8-4c96-a74e-61eb7f679a4d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1127 23:28:32.055732   12542 system_pods.go:61] "registry-proxy-dsmwb" [cae4e0b6-6db5-42bc-b440-c55e0d493d8f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1127 23:28:32.055741   12542 system_pods.go:61] "snapshot-controller-58dbcc7b99-j9r6x" [ae29e357-233a-4f9e-8ecd-22c298b013dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1127 23:28:32.055746   12542 system_pods.go:61] "snapshot-controller-58dbcc7b99-np4qs" [3788edc0-bb74-4d56-89c4-c60289985e2d] Running
	I1127 23:28:32.055751   12542 system_pods.go:61] "storage-provisioner" [b8288233-a03b-4f81-be69-16066955226c] Running
	I1127 23:28:32.055758   12542 system_pods.go:61] "tiller-deploy-7b677967b9-ng6qr" [52ea092e-2863-40a3-9738-710b1a17e38a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1127 23:28:32.055763   12542 system_pods.go:74] duration metric: took 166.259058ms to wait for pod list to return data ...
	I1127 23:28:32.055778   12542 default_sa.go:34] waiting for default service account to be created ...
	I1127 23:28:32.094715   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:32.249338   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:32.249471   12542 default_sa.go:45] found service account: "default"
	I1127 23:28:32.249492   12542 default_sa.go:55] duration metric: took 193.706387ms for default service account to be created ...
	I1127 23:28:32.249503   12542 system_pods.go:116] waiting for k8s-apps to be running ...
	I1127 23:28:32.457300   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:32.457703   12542 system_pods.go:86] 18 kube-system pods found
	I1127 23:28:32.457724   12542 system_pods.go:89] "coredns-5dd5756b68-5kgzx" [6d98b2f6-473f-40f9-ba4f-e5f7c166de81] Running
	I1127 23:28:32.457729   12542 system_pods.go:89] "csi-hostpath-attacher-0" [b0c714b2-674d-450b-b4ce-10a9a9454854] Running
	I1127 23:28:32.457734   12542 system_pods.go:89] "csi-hostpath-resizer-0" [8a413137-cdd4-4995-99ab-87552792609c] Running
	I1127 23:28:32.457740   12542 system_pods.go:89] "csi-hostpathplugin-cdrbk" [4a39ed46-aef4-4953-82a6-04509e42e9d1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1127 23:28:32.457748   12542 system_pods.go:89] "etcd-addons-052905" [ec11f619-c13f-495b-b9f2-731e7fb5c351] Running
	I1127 23:28:32.457754   12542 system_pods.go:89] "kube-apiserver-addons-052905" [039c8064-aee8-4b84-8789-3d11e872a159] Running
	I1127 23:28:32.457761   12542 system_pods.go:89] "kube-controller-manager-addons-052905" [8807511c-4fb6-485c-a192-a2493dd1bebd] Running
	I1127 23:28:32.457765   12542 system_pods.go:89] "kube-ingress-dns-minikube" [cc23eea2-e3e2-4597-95d0-bd6b89799a99] Running
	I1127 23:28:32.457769   12542 system_pods.go:89] "kube-proxy-4xph4" [df916a46-bee3-44ec-bfec-41ce80b5126f] Running
	I1127 23:28:32.457773   12542 system_pods.go:89] "kube-scheduler-addons-052905" [3791ae5b-92dd-4b26-b64f-c32c4af19abf] Running
	I1127 23:28:32.457782   12542 system_pods.go:89] "metrics-server-7c66d45ddc-pkfgc" [ab570363-b34e-40b9-babf-b27b0101e455] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 23:28:32.457794   12542 system_pods.go:89] "nvidia-device-plugin-daemonset-d844x" [810a535e-867e-4bfa-bc47-26b4aee7c94b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1127 23:28:32.457802   12542 system_pods.go:89] "registry-fw9r2" [e8da5a5e-d8d8-4c96-a74e-61eb7f679a4d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1127 23:28:32.457809   12542 system_pods.go:89] "registry-proxy-dsmwb" [cae4e0b6-6db5-42bc-b440-c55e0d493d8f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1127 23:28:32.457816   12542 system_pods.go:89] "snapshot-controller-58dbcc7b99-j9r6x" [ae29e357-233a-4f9e-8ecd-22c298b013dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1127 23:28:32.457823   12542 system_pods.go:89] "snapshot-controller-58dbcc7b99-np4qs" [3788edc0-bb74-4d56-89c4-c60289985e2d] Running
	I1127 23:28:32.457829   12542 system_pods.go:89] "storage-provisioner" [b8288233-a03b-4f81-be69-16066955226c] Running
	I1127 23:28:32.457837   12542 system_pods.go:89] "tiller-deploy-7b677967b9-ng6qr" [52ea092e-2863-40a3-9738-710b1a17e38a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1127 23:28:32.457846   12542 system_pods.go:126] duration metric: took 208.337315ms to wait for k8s-apps to be running ...
	I1127 23:28:32.457852   12542 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 23:28:32.457895   12542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:28:32.459891   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:32.489079   12542 system_svc.go:56] duration metric: took 31.214604ms WaitForService to wait for kubelet.
	I1127 23:28:32.489108   12542 kubeadm.go:581] duration metric: took 41.774469409s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1127 23:28:32.489130   12542 node_conditions.go:102] verifying NodePressure condition ...
	I1127 23:28:32.595332   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:32.656479   12542 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1127 23:28:32.656541   12542 node_conditions.go:123] node cpu capacity is 2
	I1127 23:28:32.656558   12542 node_conditions.go:105] duration metric: took 167.42206ms to run NodePressure ...
	I1127 23:28:32.656574   12542 start.go:228] waiting for startup goroutines ...
	I1127 23:28:32.757573   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:32.958725   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:32.960160   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:33.094963   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:33.248549   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:33.458370   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:33.459744   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:33.594687   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:33.756410   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:33.958307   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:33.959117   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:34.097211   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:34.249817   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:34.458141   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:34.464107   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:34.594586   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:34.748698   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:34.957534   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:34.959988   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:35.095581   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:35.248298   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:35.709036   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:35.709271   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:35.710646   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:35.762278   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:35.959607   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:35.959814   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:36.107767   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:36.273452   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:36.457873   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:36.459170   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:36.605376   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:36.750490   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:36.958484   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:36.968022   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:37.099653   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:37.255943   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:37.458229   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:37.460878   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:37.594052   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:37.755849   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:37.964413   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:37.976119   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:38.102625   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:38.249335   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:38.458758   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:38.458893   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:38.594116   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:38.748829   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:38.958307   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:38.959362   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:39.095430   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:39.248977   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:39.457356   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:39.458235   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:39.594726   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:39.751403   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:39.966064   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:39.968459   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:40.094761   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:40.253672   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:40.458774   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:40.462193   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:40.594619   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:40.749886   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:40.957540   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:40.958146   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:41.094673   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:41.250956   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:41.461021   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:41.466721   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:41.594606   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:41.753696   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:41.959693   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:41.960228   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:42.094334   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:42.252388   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:42.459214   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:42.460381   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:42.594577   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:42.750849   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:42.957949   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:42.957999   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:43.094492   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:43.248564   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:43.456694   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:43.458300   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:43.594144   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:43.750607   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:43.958404   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:43.959053   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:44.551204   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:44.556144   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:44.558932   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:44.558987   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:44.594551   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:44.776047   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:44.959168   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:44.960080   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:45.094517   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:45.252175   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:45.457685   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:45.459106   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:45.593874   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:45.749596   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:45.959034   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:45.959136   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:46.094804   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:46.248849   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:46.803983   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:46.817677   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:46.839385   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:46.839512   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:46.957827   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:46.960202   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:47.094495   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:47.248351   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:47.456950   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:47.458512   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:47.595318   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:47.749023   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:47.959966   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:47.960725   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:48.094946   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:48.252139   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:48.458939   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:48.459578   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:48.596107   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:48.748505   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:48.959663   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:48.963386   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:49.097094   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:49.251986   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:49.457842   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:49.458104   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:49.594733   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:49.748304   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:49.958569   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:49.959117   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:50.094648   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:50.251715   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:50.458546   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:50.459385   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:50.595938   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:50.751887   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:50.959427   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:50.961593   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:51.094463   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:51.248979   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:51.457891   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:51.461214   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:51.595853   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:51.757133   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:51.959039   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:51.961563   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:52.094648   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:52.249865   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:52.457591   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:52.457988   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:52.594562   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:52.749353   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:52.960992   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:52.963809   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:53.094963   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:53.249811   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:53.458721   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:53.459125   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:53.595442   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:53.750223   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:53.959372   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:53.959653   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:54.095057   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:54.250231   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:54.460176   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:54.461019   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:54.594024   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:54.764290   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:54.958630   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:54.961343   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:55.095620   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:55.251061   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:55.459795   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:55.470572   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:55.593977   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:55.754074   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:55.957863   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:55.960070   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:56.097007   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:56.253448   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:56.456598   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:56.458544   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:56.593825   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:56.755481   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:57.324042   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:57.325753   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:57.325922   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:57.332166   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:57.458011   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:57.459953   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:57.594469   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:57.749418   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:57.958100   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:57.961011   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:58.094525   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:58.249217   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:58.457799   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:58.458429   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:58.598918   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:58.750691   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:58.957313   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:58.959166   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:59.094314   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:59.250205   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:59.457542   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:28:59.466774   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:59.593493   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:59.748928   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:28:59.964031   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:28:59.969063   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:29:00.094386   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:00.249365   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:00.458314   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:29:00.461036   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:00.593801   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:00.753986   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:00.958396   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:29:00.961207   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:01.094532   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:01.262255   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:01.458022   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:29:01.460449   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:01.594787   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:01.749477   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:01.958922   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:29:01.958998   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:02.094785   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:02.249639   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:02.456741   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:29:02.458956   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:02.594460   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:02.749918   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:02.958281   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:29:02.958672   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:03.095630   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:03.251027   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:03.791126   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:03.791735   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:03.791799   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:29:03.793397   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:03.959563   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:29:03.961436   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:04.095864   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:04.254551   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:04.472881   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:29:04.480315   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:04.595008   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:04.749588   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:04.960882   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:04.975858   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:29:05.094557   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:05.248747   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:05.456844   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:29:05.458083   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:05.594287   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:05.750154   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:05.957171   12542 kapi.go:107] duration metric: took 1m6.039629775s to wait for kubernetes.io/minikube-addons=registry ...
	I1127 23:29:05.959233   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:06.095096   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:06.252149   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:06.457791   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:06.594244   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:06.748717   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:06.963714   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:07.095940   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:07.250007   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:07.458262   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:07.594660   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:07.756920   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:07.959668   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:08.094741   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:08.249215   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:08.459583   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:08.594110   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:08.775525   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:08.959690   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:09.098865   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:09.264904   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:09.460840   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:09.595489   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:09.773765   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:09.959556   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:10.094538   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:10.249798   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:10.459248   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:10.594681   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:10.753210   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:10.961276   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:11.095068   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:11.255796   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:11.458110   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:11.594665   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:11.750995   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:11.957917   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:12.093996   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:12.253337   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:12.458266   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:12.594353   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:12.749551   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:12.965586   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:13.094167   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:13.249095   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:13.457955   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:13.594364   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:13.753054   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:13.958006   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:14.094026   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:14.248551   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:14.458155   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:14.595305   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:14.753574   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:14.958294   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:15.538431   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:15.539435   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:15.541009   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:15.594177   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:15.750656   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:15.961088   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:16.095399   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:16.250070   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:16.458357   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:16.595016   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:16.748685   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:16.958831   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:17.094371   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:17.249505   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:17.457905   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:17.594204   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:17.748742   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:17.958874   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:18.094538   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:18.253669   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:18.461726   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:18.601858   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:18.750070   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:19.300621   12542 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:29:19.302106   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:19.309137   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:19.462865   12542 kapi.go:107] duration metric: took 1m19.545413983s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1127 23:29:19.597339   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:19.749738   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:20.097147   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:20.269571   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:20.594849   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:20.749112   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:21.094938   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:21.278358   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:21.597486   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:21.750427   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:22.096574   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:22.249590   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:22.594309   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:22.753496   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:23.095386   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:23.249095   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:23.594128   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:23.758742   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:24.094996   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:24.248492   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:24.595001   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:24.749964   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:25.095101   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:25.250011   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:25.594847   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:25.747972   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:29:26.094969   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:26.248465   12542 kapi.go:107] duration metric: took 1m25.597732545s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1127 23:29:26.594795   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:27.095350   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:27.594682   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:28.094851   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:28.595073   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:29.094205   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:29.594378   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:30.094629   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:30.595084   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:31.094024   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:31.595021   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:32.093792   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:32.595059   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:33.094475   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:33.594820   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:34.095376   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:34.594510   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:35.094518   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:35.595029   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:36.095247   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:36.595306   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:37.094611   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:37.594625   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:38.095042   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:38.594082   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:39.093708   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:39.595189   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:40.095341   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:40.594693   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:41.094790   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:41.594813   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:42.094423   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:42.594490   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:43.094383   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:43.594615   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:44.094422   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:44.594631   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:45.094992   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:45.593666   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:46.095156   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:46.594434   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:47.094193   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:47.594987   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:48.093985   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:48.594304   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:49.093974   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:49.593771   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:50.094480   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:50.594497   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:51.095181   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:51.595460   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:52.094952   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:52.596035   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:53.094168   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:53.594559   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:54.094576   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:54.594504   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:55.094546   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:55.594575   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:56.094947   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:56.594388   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:57.094978   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:57.595391   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:58.155618   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:58.595206   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:59.094110   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:29:59.594133   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:00.094599   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:00.596054   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:01.094653   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:01.594430   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:02.094500   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:02.595276   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:03.096677   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:03.594521   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:04.095172   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:04.594180   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:05.094159   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:05.595090   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:06.094857   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:06.594667   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:07.097400   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:07.594475   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:08.094719   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:08.595212   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:09.099109   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:09.594164   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:10.094387   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:10.595388   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:11.095203   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:11.594897   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:12.095075   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:12.595047   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:13.093977   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:13.594535   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:14.094733   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:14.594883   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:15.096668   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:15.594974   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:16.095210   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:16.595137   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:17.094271   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:17.594260   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:18.094609   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:18.594722   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:19.095358   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:19.594709   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:20.094810   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:20.594995   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:21.094183   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:21.594801   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:22.094854   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:22.594729   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:23.098880   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:23.594922   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:24.095009   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:24.593804   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:25.095764   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:25.593977   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:26.094805   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:26.596681   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:27.094451   12542 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:30:27.598735   12542 kapi.go:107] duration metric: took 2m24.080373848s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1127 23:30:27.600549   12542 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-052905 cluster.
	I1127 23:30:27.601926   12542 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1127 23:30:27.603485   12542 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1127 23:30:27.605124   12542 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, cloud-spanner, inspektor-gadget, metrics-server, ingress-dns, helm-tiller, nvidia-device-plugin, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1127 23:30:27.606566   12542 addons.go:502] enable addons completed in 2m37.005302831s: enabled=[storage-provisioner default-storageclass cloud-spanner inspektor-gadget metrics-server ingress-dns helm-tiller nvidia-device-plugin volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1127 23:30:27.606602   12542 start.go:233] waiting for cluster config update ...
	I1127 23:30:27.606623   12542 start.go:242] writing updated cluster config ...
	I1127 23:30:27.607091   12542 ssh_runner.go:195] Run: rm -f paused
	I1127 23:30:27.668659   12542 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1127 23:30:27.670436   12542 out.go:177] * Done! kubectl is now configured to use "addons-052905" cluster and "default" namespace by default
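For illustration, a pod manifest that opts out of the credential mount described in the gcp-auth output above could carry the `gcp-auth-skip-secret` label as sketched below. This is only a sketch: the pod name and image are placeholders, and the "true" value is an assumption about what the gcp-auth webhook checks for; the log above only confirms the label key itself.

  apiVersion: v1
  kind: Pod
  metadata:
    name: no-gcp-creds-example        # hypothetical name, illustration only
    labels:
      gcp-auth-skip-secret: "true"    # asks the gcp-auth webhook not to mount GCP credentials into this pod
  spec:
    containers:
    - name: app
      image: nginx                    # placeholder image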
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-11-27 23:27:04 UTC, ends at Mon 2023-11-27 23:33:16 UTC. --
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.253664865Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701127996253646489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529478,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=46911d56-74c7-419c-8535-d395f2999e41 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.254712148Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=55ce2ffd-129d-47ec-84f8-d1603eb1d38b name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.254791593Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=55ce2ffd-129d-47ec-84f8-d1603eb1d38b name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.255222212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c4b4e6c4b45209cbb257bd224711bd6c89f07d00d572968741e4523670bd057,PodSandboxId:9ffecb5f638d3d85d84a0a9cf9f962dfc53132bd027270ef1f8f3f3994b238bc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701127989453555658,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-zjqnw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a54f779-8ee5-4b20-b427-601aa54526e3,},Annotations:map[string]string{io.kubernetes.container.hash: 4927339b,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0375969e54e84f1bbb1a9f305ffe710abbe759ed9f7cb7c2bb2d2f8f75cfb221,PodSandboxId:fd18bf36d9fae644eea39ef55afdc478e17f63b4b69d1926161ac6c1dbecebd8,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701127863470704062,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-l2b9s,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 82ac3f4e-6823-4f96-8216-25c7378b5b94,},An
notations:map[string]string{io.kubernetes.container.hash: 8a378dca,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada4aed12c7aa0222fd5d76a85be23d045f3cedd75e3044892ecdf5f49444c32,PodSandboxId:df1092579360b1f8c1cfed875eed28a9cf1de5eab5acde88e9616e5cf106f1fb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1701127847346331760,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: cab3e3aa-ecb4-4336-8075-199b469e8427,},Annotations:map[string]string{io.kubernetes.container.hash: 7ae3c83b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc84b60dca88d5cdb8d172b3d96c6ff949c4222597ea586f59b4e5cf141549e,PodSandboxId:7f2c125a1ed3d555546b3fc2f96ef29a37e28356704451d217b66142eee76d36,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701127826452604657,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-p5nsf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c1fc73e5-f59c-4cf6-916e-8f483a517706,},Annotations:map[string]string{io.kubernetes.container.hash: 1845f180,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4756efbfd9dce73c81fb6aebdb55d74924e7ce94e41eaf236a6b99e0bed7c0b3,PodSandboxId:5102dfc3257950c35a828faa223f81819e877c712a5d69340c625b98cc1ebe34,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701127748648390566,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xmhml,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e2e85764-09dc-4bde-a27b-483f31eac2ba,},Annotations:map[string]string{io.kubernetes.container.hash: a3cbc4e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2ae6f5609a874d532f3576c9f77ec6cb19921265032ad83dd84c6091f3682d,PodSandboxId:f96ab2eb5121b0b23c02b9c8b238a8515575394b08f94a66b7782a4644969ce7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701127737887371155,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hsf6j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c8d9bff2-9499-48bd-9340-1bd546c40ebe,},Annotations:map[string]string{io.kubernetes.container.hash: feddfee2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd479b4f909e83d8840c8838ce2ffa65babab0ccaf22a1ef48a6563e14821ada,PodSandboxId:f2194e21b41ff90f33910df52df358d8fabdd79e15e378e8323f153f00ce5455,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]st
ring{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1701127732046307203,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-nmb46,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 6e127b7c-4486-4b8b-8547-982efa3dce9e,},Annotations:map[string]string{io.kubernetes.container.hash: 5d8f7d93,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e5a002149b52f175c6393b2474fb219754e06556875f72896e46b8a3802eb8,PodSandboxId:2179cf2e98c91b036c89c432d08ab1acf2c6cfc794e77e28214f4d7d8f07e293,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302
a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701127693491234798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8288233-a03b-4f81-be69-16066955226c,},Annotations:map[string]string{io.kubernetes.container.hash: d82f9af3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1593a1f928c501ed44f63ddff917cd4619ab2f79f3be4d8db26e18d8df5b693,PodSandboxId:465ec727becae62b48885097d14a1b434835481a19434cb5f53c1f37d4a42a6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annot
ations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701127684476760962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xph4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df916a46-bee3-44ec-bfec-41ce80b5126f,},Annotations:map[string]string{io.kubernetes.container.hash: 8587fbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3492156356b769b44dca4428af2eb336d512416a47807bde162ccc437964ef,PodSandboxId:935ce5ed738075b873ec8e17b7b10850bad3950a5b550e986f9a818a5ca98212,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:r
egistry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701127676819270624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5kgzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d98b2f6-473f-40f9-ba4f-e5f7c166de81,},Annotations:map[string]string{io.kubernetes.container.hash: dcc3b61b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e9e9debd4ea86799581c5aa02a4db0431faebd2e47633654ac060e4c326557f,PodSandboxId:27531a9246dca7e627d9123fdf20a79aa99f5ab55679a37d6fd3
abe923c06c38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701127650545776144,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-052905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08b4e8a1c04188dbc40d5ea12c7399bb,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962c6ff31515a9f87cc4e44666fc78164e9a52d6d371bfd8bcdda2ad8f5ccf2e,PodSandboxId:a0975ee8963b7a63d6c461f4e7b7ec20731ca0af7d6385a308b91eaa1812d3ed,Me
tadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701127650451993028,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-052905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de4e04f37daeb4c1f92bacd5d3ddeee9,},Annotations:map[string]string{io.kubernetes.container.hash: 5fbe0aee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b6bb8d9c3f9cd62152d2119d0ca668d7a6522825b7eb4b1dd3bab2ad47f40e,PodSandboxId:fd5eeebcac2a733ffb032b48cfcef98245c09d4991208a6c22d5c2624e6979d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,
Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701127650262132203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-052905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4abd61c52b9900a1d40f938e59820c11,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5308d87bc4c887c45161941503108aec9e04ce94bd5eba74838cd4675beafff,PodSandboxId:1e133e90feeba9620ac2bc3412b9ecbda9a5e1782c42e29fc26f530d35825002,Metadata:&ContainerMetadata{Name:kub
e-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701127650071531515,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-052905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11a712558459f4bd0fef69967da264e2,},Annotations:map[string]string{io.kubernetes.container.hash: 7f4a15e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=55ce2ffd-129d-47ec-84f8-d1603eb1d38b name=/runtime.v1.RuntimeService/ListContainers
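	The Version, ImageFsInfo, and ListContainers requests logged here are standard CRI RPCs that the kubelet polls periodically. Assuming crictl is available inside the minikube VM, the same information can be queried by hand, for example:

	    $ minikube -p addons-052905 ssh
	    $ sudo crictl version       # RuntimeService/Version      -> cri-o 1.24.1, as logged above
	    $ sudo crictl imagefsinfo   # ImageService/ImageFsInfo    -> overlay-images filesystem usage
	    $ sudo crictl ps -a         # RuntimeService/ListContainers with no filter applied

	This is only a convenience for reproducing what the kubelet asks for; it does not alter the state captured in the journal below.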
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.294763783Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=54008cf6-f277-400e-8d09-0e57dd96d43e name=/runtime.v1.RuntimeService/Version
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.294968665Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=54008cf6-f277-400e-8d09-0e57dd96d43e name=/runtime.v1.RuntimeService/Version
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.296091137Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6385becf-d430-4224-8c8d-2f85beb82ce4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.297384430Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701127996297365562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529478,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=6385becf-d430-4224-8c8d-2f85beb82ce4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.297991508Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=89c8777b-c296-4fac-9098-81ae51e4d2ce name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.298045392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=89c8777b-c296-4fac-9098-81ae51e4d2ce name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.298323506Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c4b4e6c4b45209cbb257bd224711bd6c89f07d00d572968741e4523670bd057,PodSandboxId:9ffecb5f638d3d85d84a0a9cf9f962dfc53132bd027270ef1f8f3f3994b238bc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701127989453555658,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-zjqnw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a54f779-8ee5-4b20-b427-601aa54526e3,},Annotations:map[string]string{io.kubernetes.container.hash: 4927339b,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0375969e54e84f1bbb1a9f305ffe710abbe759ed9f7cb7c2bb2d2f8f75cfb221,PodSandboxId:fd18bf36d9fae644eea39ef55afdc478e17f63b4b69d1926161ac6c1dbecebd8,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701127863470704062,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-l2b9s,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 82ac3f4e-6823-4f96-8216-25c7378b5b94,},An
notations:map[string]string{io.kubernetes.container.hash: 8a378dca,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada4aed12c7aa0222fd5d76a85be23d045f3cedd75e3044892ecdf5f49444c32,PodSandboxId:df1092579360b1f8c1cfed875eed28a9cf1de5eab5acde88e9616e5cf106f1fb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1701127847346331760,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: cab3e3aa-ecb4-4336-8075-199b469e8427,},Annotations:map[string]string{io.kubernetes.container.hash: 7ae3c83b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc84b60dca88d5cdb8d172b3d96c6ff949c4222597ea586f59b4e5cf141549e,PodSandboxId:7f2c125a1ed3d555546b3fc2f96ef29a37e28356704451d217b66142eee76d36,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701127826452604657,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-p5nsf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c1fc73e5-f59c-4cf6-916e-8f483a517706,},Annotations:map[string]string{io.kubernetes.container.hash: 1845f180,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4756efbfd9dce73c81fb6aebdb55d74924e7ce94e41eaf236a6b99e0bed7c0b3,PodSandboxId:5102dfc3257950c35a828faa223f81819e877c712a5d69340c625b98cc1ebe34,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701127748648390566,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xmhml,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e2e85764-09dc-4bde-a27b-483f31eac2ba,},Annotations:map[string]string{io.kubernetes.container.hash: a3cbc4e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2ae6f5609a874d532f3576c9f77ec6cb19921265032ad83dd84c6091f3682d,PodSandboxId:f96ab2eb5121b0b23c02b9c8b238a8515575394b08f94a66b7782a4644969ce7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701127737887371155,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hsf6j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c8d9bff2-9499-48bd-9340-1bd546c40ebe,},Annotations:map[string]string{io.kubernetes.container.hash: feddfee2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd479b4f909e83d8840c8838ce2ffa65babab0ccaf22a1ef48a6563e14821ada,PodSandboxId:f2194e21b41ff90f33910df52df358d8fabdd79e15e378e8323f153f00ce5455,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]st
ring{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1701127732046307203,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-nmb46,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 6e127b7c-4486-4b8b-8547-982efa3dce9e,},Annotations:map[string]string{io.kubernetes.container.hash: 5d8f7d93,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e5a002149b52f175c6393b2474fb219754e06556875f72896e46b8a3802eb8,PodSandboxId:2179cf2e98c91b036c89c432d08ab1acf2c6cfc794e77e28214f4d7d8f07e293,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302
a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701127693491234798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8288233-a03b-4f81-be69-16066955226c,},Annotations:map[string]string{io.kubernetes.container.hash: d82f9af3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1593a1f928c501ed44f63ddff917cd4619ab2f79f3be4d8db26e18d8df5b693,PodSandboxId:465ec727becae62b48885097d14a1b434835481a19434cb5f53c1f37d4a42a6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annot
ations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701127684476760962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xph4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df916a46-bee3-44ec-bfec-41ce80b5126f,},Annotations:map[string]string{io.kubernetes.container.hash: 8587fbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3492156356b769b44dca4428af2eb336d512416a47807bde162ccc437964ef,PodSandboxId:935ce5ed738075b873ec8e17b7b10850bad3950a5b550e986f9a818a5ca98212,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:r
egistry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701127676819270624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5kgzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d98b2f6-473f-40f9-ba4f-e5f7c166de81,},Annotations:map[string]string{io.kubernetes.container.hash: dcc3b61b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e9e9debd4ea86799581c5aa02a4db0431faebd2e47633654ac060e4c326557f,PodSandboxId:27531a9246dca7e627d9123fdf20a79aa99f5ab55679a37d6fd3
abe923c06c38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701127650545776144,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-052905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08b4e8a1c04188dbc40d5ea12c7399bb,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962c6ff31515a9f87cc4e44666fc78164e9a52d6d371bfd8bcdda2ad8f5ccf2e,PodSandboxId:a0975ee8963b7a63d6c461f4e7b7ec20731ca0af7d6385a308b91eaa1812d3ed,Me
tadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701127650451993028,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-052905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de4e04f37daeb4c1f92bacd5d3ddeee9,},Annotations:map[string]string{io.kubernetes.container.hash: 5fbe0aee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b6bb8d9c3f9cd62152d2119d0ca668d7a6522825b7eb4b1dd3bab2ad47f40e,PodSandboxId:fd5eeebcac2a733ffb032b48cfcef98245c09d4991208a6c22d5c2624e6979d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,
Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701127650262132203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-052905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4abd61c52b9900a1d40f938e59820c11,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5308d87bc4c887c45161941503108aec9e04ce94bd5eba74838cd4675beafff,PodSandboxId:1e133e90feeba9620ac2bc3412b9ecbda9a5e1782c42e29fc26f530d35825002,Metadata:&ContainerMetadata{Name:kub
e-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701127650071531515,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-052905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11a712558459f4bd0fef69967da264e2,},Annotations:map[string]string{io.kubernetes.container.hash: 7f4a15e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=89c8777b-c296-4fac-9098-81ae51e4d2ce name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.346495123Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=81def5d9-238f-444e-a2cc-445f8f41661f name=/runtime.v1.RuntimeService/Version
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.346594004Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=81def5d9-238f-444e-a2cc-445f8f41661f name=/runtime.v1.RuntimeService/Version
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.347557525Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c9655e80-fa55-4def-aa5d-05b8e71626ae name=/runtime.v1.ImageService/ImageFsInfo
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.348770986Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701127996348750779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529478,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=c9655e80-fa55-4def-aa5d-05b8e71626ae name=/runtime.v1.ImageService/ImageFsInfo
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.349481527Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1dc40b42-d0d1-4cf5-b45e-5faf32daa9f9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.349563722Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1dc40b42-d0d1-4cf5-b45e-5faf32daa9f9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.349842351Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c4b4e6c4b45209cbb257bd224711bd6c89f07d00d572968741e4523670bd057,PodSandboxId:9ffecb5f638d3d85d84a0a9cf9f962dfc53132bd027270ef1f8f3f3994b238bc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701127989453555658,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-zjqnw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a54f779-8ee5-4b20-b427-601aa54526e3,},Annotations:map[string]string{io.kubernetes.container.hash: 4927339b,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0375969e54e84f1bbb1a9f305ffe710abbe759ed9f7cb7c2bb2d2f8f75cfb221,PodSandboxId:fd18bf36d9fae644eea39ef55afdc478e17f63b4b69d1926161ac6c1dbecebd8,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701127863470704062,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-l2b9s,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 82ac3f4e-6823-4f96-8216-25c7378b5b94,},An
notations:map[string]string{io.kubernetes.container.hash: 8a378dca,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada4aed12c7aa0222fd5d76a85be23d045f3cedd75e3044892ecdf5f49444c32,PodSandboxId:df1092579360b1f8c1cfed875eed28a9cf1de5eab5acde88e9616e5cf106f1fb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1701127847346331760,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: cab3e3aa-ecb4-4336-8075-199b469e8427,},Annotations:map[string]string{io.kubernetes.container.hash: 7ae3c83b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc84b60dca88d5cdb8d172b3d96c6ff949c4222597ea586f59b4e5cf141549e,PodSandboxId:7f2c125a1ed3d555546b3fc2f96ef29a37e28356704451d217b66142eee76d36,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701127826452604657,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-p5nsf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c1fc73e5-f59c-4cf6-916e-8f483a517706,},Annotations:map[string]string{io.kubernetes.container.hash: 1845f180,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4756efbfd9dce73c81fb6aebdb55d74924e7ce94e41eaf236a6b99e0bed7c0b3,PodSandboxId:5102dfc3257950c35a828faa223f81819e877c712a5d69340c625b98cc1ebe34,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701127748648390566,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xmhml,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e2e85764-09dc-4bde-a27b-483f31eac2ba,},Annotations:map[string]string{io.kubernetes.container.hash: a3cbc4e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2ae6f5609a874d532f3576c9f77ec6cb19921265032ad83dd84c6091f3682d,PodSandboxId:f96ab2eb5121b0b23c02b9c8b238a8515575394b08f94a66b7782a4644969ce7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701127737887371155,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hsf6j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c8d9bff2-9499-48bd-9340-1bd546c40ebe,},Annotations:map[string]string{io.kubernetes.container.hash: feddfee2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd479b4f909e83d8840c8838ce2ffa65babab0ccaf22a1ef48a6563e14821ada,PodSandboxId:f2194e21b41ff90f33910df52df358d8fabdd79e15e378e8323f153f00ce5455,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]st
ring{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1701127732046307203,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-nmb46,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 6e127b7c-4486-4b8b-8547-982efa3dce9e,},Annotations:map[string]string{io.kubernetes.container.hash: 5d8f7d93,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e5a002149b52f175c6393b2474fb219754e06556875f72896e46b8a3802eb8,PodSandboxId:2179cf2e98c91b036c89c432d08ab1acf2c6cfc794e77e28214f4d7d8f07e293,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302
a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701127693491234798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8288233-a03b-4f81-be69-16066955226c,},Annotations:map[string]string{io.kubernetes.container.hash: d82f9af3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1593a1f928c501ed44f63ddff917cd4619ab2f79f3be4d8db26e18d8df5b693,PodSandboxId:465ec727becae62b48885097d14a1b434835481a19434cb5f53c1f37d4a42a6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annot
ations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701127684476760962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xph4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df916a46-bee3-44ec-bfec-41ce80b5126f,},Annotations:map[string]string{io.kubernetes.container.hash: 8587fbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3492156356b769b44dca4428af2eb336d512416a47807bde162ccc437964ef,PodSandboxId:935ce5ed738075b873ec8e17b7b10850bad3950a5b550e986f9a818a5ca98212,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:r
egistry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701127676819270624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5kgzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d98b2f6-473f-40f9-ba4f-e5f7c166de81,},Annotations:map[string]string{io.kubernetes.container.hash: dcc3b61b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e9e9debd4ea86799581c5aa02a4db0431faebd2e47633654ac060e4c326557f,PodSandboxId:27531a9246dca7e627d9123fdf20a79aa99f5ab55679a37d6fd3
abe923c06c38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701127650545776144,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-052905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08b4e8a1c04188dbc40d5ea12c7399bb,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962c6ff31515a9f87cc4e44666fc78164e9a52d6d371bfd8bcdda2ad8f5ccf2e,PodSandboxId:a0975ee8963b7a63d6c461f4e7b7ec20731ca0af7d6385a308b91eaa1812d3ed,Me
tadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701127650451993028,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-052905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de4e04f37daeb4c1f92bacd5d3ddeee9,},Annotations:map[string]string{io.kubernetes.container.hash: 5fbe0aee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b6bb8d9c3f9cd62152d2119d0ca668d7a6522825b7eb4b1dd3bab2ad47f40e,PodSandboxId:fd5eeebcac2a733ffb032b48cfcef98245c09d4991208a6c22d5c2624e6979d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,
Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701127650262132203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-052905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4abd61c52b9900a1d40f938e59820c11,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5308d87bc4c887c45161941503108aec9e04ce94bd5eba74838cd4675beafff,PodSandboxId:1e133e90feeba9620ac2bc3412b9ecbda9a5e1782c42e29fc26f530d35825002,Metadata:&ContainerMetadata{Name:kub
e-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701127650071531515,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-052905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11a712558459f4bd0fef69967da264e2,},Annotations:map[string]string{io.kubernetes.container.hash: 7f4a15e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1dc40b42-d0d1-4cf5-b45e-5faf32daa9f9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.402069156Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0209e6c0-da0f-4898-8016-0a0775f97db2 name=/runtime.v1.RuntimeService/Version
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.402128120Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0209e6c0-da0f-4898-8016-0a0775f97db2 name=/runtime.v1.RuntimeService/Version
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.403588557Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=20ef5df5-7027-4a88-9606-4ba94bc78f20 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.404779502Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701127996404761651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529478,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=20ef5df5-7027-4a88-9606-4ba94bc78f20 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.406288381Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4d563aa3-f3ea-420c-8914-a99f11a65ac8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.406339065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4d563aa3-f3ea-420c-8914-a99f11a65ac8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:33:16 addons-052905 crio[717]: time="2023-11-27 23:33:16.406642027Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c4b4e6c4b45209cbb257bd224711bd6c89f07d00d572968741e4523670bd057,PodSandboxId:9ffecb5f638d3d85d84a0a9cf9f962dfc53132bd027270ef1f8f3f3994b238bc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701127989453555658,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-zjqnw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a54f779-8ee5-4b20-b427-601aa54526e3,},Annotations:map[string]string{io.kubernetes.container.hash: 4927339b,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0375969e54e84f1bbb1a9f305ffe710abbe759ed9f7cb7c2bb2d2f8f75cfb221,PodSandboxId:fd18bf36d9fae644eea39ef55afdc478e17f63b4b69d1926161ac6c1dbecebd8,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701127863470704062,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-l2b9s,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 82ac3f4e-6823-4f96-8216-25c7378b5b94,},An
notations:map[string]string{io.kubernetes.container.hash: 8a378dca,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada4aed12c7aa0222fd5d76a85be23d045f3cedd75e3044892ecdf5f49444c32,PodSandboxId:df1092579360b1f8c1cfed875eed28a9cf1de5eab5acde88e9616e5cf106f1fb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1701127847346331760,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: cab3e3aa-ecb4-4336-8075-199b469e8427,},Annotations:map[string]string{io.kubernetes.container.hash: 7ae3c83b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc84b60dca88d5cdb8d172b3d96c6ff949c4222597ea586f59b4e5cf141549e,PodSandboxId:7f2c125a1ed3d555546b3fc2f96ef29a37e28356704451d217b66142eee76d36,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701127826452604657,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-p5nsf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c1fc73e5-f59c-4cf6-916e-8f483a517706,},Annotations:map[string]string{io.kubernetes.container.hash: 1845f180,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4756efbfd9dce73c81fb6aebdb55d74924e7ce94e41eaf236a6b99e0bed7c0b3,PodSandboxId:5102dfc3257950c35a828faa223f81819e877c712a5d69340c625b98cc1ebe34,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701127748648390566,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xmhml,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e2e85764-09dc-4bde-a27b-483f31eac2ba,},Annotations:map[string]string{io.kubernetes.container.hash: a3cbc4e5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2ae6f5609a874d532f3576c9f77ec6cb19921265032ad83dd84c6091f3682d,PodSandboxId:f96ab2eb5121b0b23c02b9c8b238a8515575394b08f94a66b7782a4644969ce7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certg
en@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701127737887371155,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hsf6j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c8d9bff2-9499-48bd-9340-1bd546c40ebe,},Annotations:map[string]string{io.kubernetes.container.hash: feddfee2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd479b4f909e83d8840c8838ce2ffa65babab0ccaf22a1ef48a6563e14821ada,PodSandboxId:f2194e21b41ff90f33910df52df358d8fabdd79e15e378e8323f153f00ce5455,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]st
ring{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1701127732046307203,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-nmb46,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 6e127b7c-4486-4b8b-8547-982efa3dce9e,},Annotations:map[string]string{io.kubernetes.container.hash: 5d8f7d93,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e5a002149b52f175c6393b2474fb219754e06556875f72896e46b8a3802eb8,PodSandboxId:2179cf2e98c91b036c89c432d08ab1acf2c6cfc794e77e28214f4d7d8f07e293,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302
a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701127693491234798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8288233-a03b-4f81-be69-16066955226c,},Annotations:map[string]string{io.kubernetes.container.hash: d82f9af3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1593a1f928c501ed44f63ddff917cd4619ab2f79f3be4d8db26e18d8df5b693,PodSandboxId:465ec727becae62b48885097d14a1b434835481a19434cb5f53c1f37d4a42a6b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annot
ations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701127684476760962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4xph4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df916a46-bee3-44ec-bfec-41ce80b5126f,},Annotations:map[string]string{io.kubernetes.container.hash: 8587fbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3492156356b769b44dca4428af2eb336d512416a47807bde162ccc437964ef,PodSandboxId:935ce5ed738075b873ec8e17b7b10850bad3950a5b550e986f9a818a5ca98212,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:r
egistry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701127676819270624,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5kgzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d98b2f6-473f-40f9-ba4f-e5f7c166de81,},Annotations:map[string]string{io.kubernetes.container.hash: dcc3b61b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e9e9debd4ea86799581c5aa02a4db0431faebd2e47633654ac060e4c326557f,PodSandboxId:27531a9246dca7e627d9123fdf20a79aa99f5ab55679a37d6fd3
abe923c06c38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701127650545776144,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-052905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08b4e8a1c04188dbc40d5ea12c7399bb,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962c6ff31515a9f87cc4e44666fc78164e9a52d6d371bfd8bcdda2ad8f5ccf2e,PodSandboxId:a0975ee8963b7a63d6c461f4e7b7ec20731ca0af7d6385a308b91eaa1812d3ed,Me
tadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701127650451993028,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-052905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de4e04f37daeb4c1f92bacd5d3ddeee9,},Annotations:map[string]string{io.kubernetes.container.hash: 5fbe0aee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b6bb8d9c3f9cd62152d2119d0ca668d7a6522825b7eb4b1dd3bab2ad47f40e,PodSandboxId:fd5eeebcac2a733ffb032b48cfcef98245c09d4991208a6c22d5c2624e6979d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,
Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701127650262132203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-052905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4abd61c52b9900a1d40f938e59820c11,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5308d87bc4c887c45161941503108aec9e04ce94bd5eba74838cd4675beafff,PodSandboxId:1e133e90feeba9620ac2bc3412b9ecbda9a5e1782c42e29fc26f530d35825002,Metadata:&ContainerMetadata{Name:kub
e-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701127650071531515,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-052905,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11a712558459f4bd0fef69967da264e2,},Annotations:map[string]string{io.kubernetes.container.hash: 7f4a15e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4d563aa3-f3ea-420c-8914-a99f11a65ac8 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1c4b4e6c4b452       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   9ffecb5f638d3       hello-world-app-5d77478584-zjqnw
	0375969e54e84       ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1                        2 minutes ago       Running             headlamp                  0                   fd18bf36d9fae       headlamp-777fd4b855-l2b9s
	ada4aed12c7aa       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                              2 minutes ago       Running             nginx                     0                   df1092579360b       nginx
	4bc84b60dca88       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   7f2c125a1ed3d       gcp-auth-d4c87556c-p5nsf
	4756efbfd9dce       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   4 minutes ago       Exited              patch                     0                   5102dfc325795       ingress-nginx-admission-patch-xmhml
	6c2ae6f5609a8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   4 minutes ago       Exited              create                    0                   f96ab2eb5121b       ingress-nginx-admission-create-hsf6j
	bd479b4f909e8       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   f2194e21b41ff       local-path-provisioner-78b46b4d5c-nmb46
	52e5a002149b5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   2179cf2e98c91       storage-provisioner
	e1593a1f928c5       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             5 minutes ago       Running             kube-proxy                0                   465ec727becae       kube-proxy-4xph4
	3c3492156356b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             5 minutes ago       Running             coredns                   0                   935ce5ed73807       coredns-5dd5756b68-5kgzx
	4e9e9debd4ea8       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             5 minutes ago       Running             kube-scheduler            0                   27531a9246dca       kube-scheduler-addons-052905
	962c6ff31515a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             5 minutes ago       Running             etcd                      0                   a0975ee8963b7       etcd-addons-052905
	21b6bb8d9c3f9       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             5 minutes ago       Running             kube-controller-manager   0                   fd5eeebcac2a7       kube-controller-manager-addons-052905
	c5308d87bc4c8       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             5 minutes ago       Running             kube-apiserver            0                   1e133e90feeba       kube-apiserver-addons-052905
	
	* 
	* ==> coredns [3c3492156356b769b44dca4428af2eb336d512416a47807bde162ccc437964ef] <==
	* [INFO] 10.244.0.9:38948 - 45803 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000126262s
	[INFO] 10.244.0.9:47255 - 26982 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000316535s
	[INFO] 10.244.0.9:47255 - 30820 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000371484s
	[INFO] 10.244.0.9:42423 - 59179 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000098251s
	[INFO] 10.244.0.9:42423 - 51753 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085524s
	[INFO] 10.244.0.9:37801 - 7759 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000215294s
	[INFO] 10.244.0.9:37801 - 61769 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000099091s
	[INFO] 10.244.0.9:56299 - 5083 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00009862s
	[INFO] 10.244.0.9:56299 - 212 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00006051s
	[INFO] 10.244.0.9:47163 - 38873 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000068795s
	[INFO] 10.244.0.9:47163 - 35815 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000147159s
	[INFO] 10.244.0.9:54253 - 23800 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000033839s
	[INFO] 10.244.0.9:54253 - 48890 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000051132s
	[INFO] 10.244.0.9:43240 - 42743 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000035313s
	[INFO] 10.244.0.9:43240 - 57334 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000054588s
	[INFO] 10.244.0.21:54637 - 20491 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00065479s
	[INFO] 10.244.0.21:59372 - 59282 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000112204s
	[INFO] 10.244.0.21:48890 - 43805 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00010878s
	[INFO] 10.244.0.21:39459 - 14577 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000095038s
	[INFO] 10.244.0.21:40428 - 14201 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000140485s
	[INFO] 10.244.0.21:48408 - 62835 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101822s
	[INFO] 10.244.0.21:50342 - 50635 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 306 0.000575587s
	[INFO] 10.244.0.21:41122 - 60254 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001042662s
	[INFO] 10.244.0.24:52075 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000240303s
	[INFO] 10.244.0.24:38126 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000148774s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-052905
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-052905
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=addons-052905
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_27T23_27_38_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-052905
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 23:27:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-052905
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Nov 2023 23:33:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Nov 2023 23:31:43 +0000   Mon, 27 Nov 2023 23:27:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Nov 2023 23:31:43 +0000   Mon, 27 Nov 2023 23:27:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Nov 2023 23:31:43 +0000   Mon, 27 Nov 2023 23:27:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Nov 2023 23:31:43 +0000   Mon, 27 Nov 2023 23:27:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.221
	  Hostname:    addons-052905
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 74f24601565944bab043ace6b4c5f407
	  System UUID:                74f24601-5659-44ba-b043-ace6b4c5f407
	  Boot ID:                    d061bc8b-93c6-4b9d-92f1-eaccbf098d55
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-zjqnw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  gcp-auth                    gcp-auth-d4c87556c-p5nsf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  headlamp                    headlamp-777fd4b855-l2b9s                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 coredns-5dd5756b68-5kgzx                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m26s
	  kube-system                 etcd-addons-052905                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m38s
	  kube-system                 kube-apiserver-addons-052905               250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-controller-manager-addons-052905      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-proxy-4xph4                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-scheduler-addons-052905               100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  local-path-storage          local-path-provisioner-78b46b4d5c-nmb46    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m47s (x8 over 5m47s)  kubelet          Node addons-052905 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m47s (x8 over 5m47s)  kubelet          Node addons-052905 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m47s (x7 over 5m47s)  kubelet          Node addons-052905 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m38s                  kubelet          Node addons-052905 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m38s                  kubelet          Node addons-052905 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m38s                  kubelet          Node addons-052905 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m38s                  kubelet          Node addons-052905 status is now: NodeReady
	  Normal  RegisteredNode           5m27s                  node-controller  Node addons-052905 event: Registered Node addons-052905 in Controller
	
	* 
	* ==> dmesg <==
	* [  +4.443518] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Nov27 23:27] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.146350] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.006805] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.778364] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.112350] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.143639] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.111537] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.205415] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[  +9.132397] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[  +9.253875] systemd-fstab-generator[1249]: Ignoring "noauto" for root device
	[Nov27 23:28] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.223455] kauditd_printk_skb: 26 callbacks suppressed
	[ +28.225738] kauditd_printk_skb: 16 callbacks suppressed
	[Nov27 23:29] kauditd_printk_skb: 4 callbacks suppressed
	[Nov27 23:30] kauditd_printk_skb: 16 callbacks suppressed
	[ +14.944469] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.099830] kauditd_printk_skb: 13 callbacks suppressed
	[Nov27 23:31] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.439853] kauditd_printk_skb: 7 callbacks suppressed
	[Nov27 23:33] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.236504] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [962c6ff31515a9f87cc4e44666fc78164e9a52d6d371bfd8bcdda2ad8f5ccf2e] <==
	* {"level":"info","ts":"2023-11-27T23:29:15.529697Z","caller":"traceutil/trace.go:171","msg":"trace[704387759] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1105; }","duration":"442.282732ms","start":"2023-11-27T23:29:15.087406Z","end":"2023-11-27T23:29:15.529689Z","steps":["trace[704387759] 'agreement among raft nodes before linearized reading'  (duration: 442.196965ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T23:29:15.529739Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-27T23:29:15.087391Z","time spent":"442.342578ms","remote":"127.0.0.1:55128","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":10975,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2023-11-27T23:29:15.530042Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"290.562835ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82382"}
	{"level":"info","ts":"2023-11-27T23:29:15.530155Z","caller":"traceutil/trace.go:171","msg":"trace[1412840509] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1105; }","duration":"290.678186ms","start":"2023-11-27T23:29:15.239469Z","end":"2023-11-27T23:29:15.530148Z","steps":["trace[1412840509] 'agreement among raft nodes before linearized reading'  (duration: 290.375373ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T23:29:15.53047Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.308254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-11-27T23:29:15.530521Z","caller":"traceutil/trace.go:171","msg":"trace[1810239104] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:1105; }","duration":"122.365527ms","start":"2023-11-27T23:29:15.408149Z","end":"2023-11-27T23:29:15.530515Z","steps":["trace[1810239104] 'agreement among raft nodes before linearized reading'  (duration: 122.298052ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T23:29:15.530695Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.127353ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-11-27T23:29:15.530739Z","caller":"traceutil/trace.go:171","msg":"trace[723639642] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:1105; }","duration":"258.173726ms","start":"2023-11-27T23:29:15.27256Z","end":"2023-11-27T23:29:15.530733Z","steps":["trace[723639642] 'agreement among raft nodes before linearized reading'  (duration: 258.115099ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T23:29:19.287076Z","caller":"traceutil/trace.go:171","msg":"trace[893857166] linearizableReadLoop","detail":"{readStateIndex:1151; appliedIndex:1150; }","duration":"336.0793ms","start":"2023-11-27T23:29:18.950984Z","end":"2023-11-27T23:29:19.287063Z","steps":["trace[893857166] 'read index received'  (duration: 335.786847ms)","trace[893857166] 'applied index is now lower than readState.Index'  (duration: 291.853µs)"],"step_count":2}
	{"level":"info","ts":"2023-11-27T23:29:19.287346Z","caller":"traceutil/trace.go:171","msg":"trace[550024323] transaction","detail":"{read_only:false; response_revision:1115; number_of_response:1; }","duration":"442.467766ms","start":"2023-11-27T23:29:18.844868Z","end":"2023-11-27T23:29:19.287336Z","steps":["trace[550024323] 'process raft request'  (duration: 441.95406ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T23:29:19.287423Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-27T23:29:18.844852Z","time spent":"442.518714ms","remote":"127.0.0.1:55150","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1104 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2023-11-27T23:29:19.287554Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"336.589228ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13864"}
	{"level":"info","ts":"2023-11-27T23:29:19.287572Z","caller":"traceutil/trace.go:171","msg":"trace[20063486] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1115; }","duration":"336.608727ms","start":"2023-11-27T23:29:18.950958Z","end":"2023-11-27T23:29:19.287566Z","steps":["trace[20063486] 'agreement among raft nodes before linearized reading'  (duration: 336.548262ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T23:29:19.287586Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-27T23:29:18.950942Z","time spent":"336.640299ms","remote":"127.0.0.1:55128","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":13888,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2023-11-27T23:29:19.28768Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"268.084933ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-controller-7c6974c4d8-mqwsv\" ","response":"range_response_count:1 size:5658"}
	{"level":"info","ts":"2023-11-27T23:29:19.287696Z","caller":"traceutil/trace.go:171","msg":"trace[215425690] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-controller-7c6974c4d8-mqwsv; range_end:; response_count:1; response_revision:1115; }","duration":"268.097736ms","start":"2023-11-27T23:29:19.019591Z","end":"2023-11-27T23:29:19.287689Z","steps":["trace[215425690] 'agreement among raft nodes before linearized reading'  (duration: 268.072047ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T23:29:19.288095Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.775447ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10951"}
	{"level":"info","ts":"2023-11-27T23:29:19.28812Z","caller":"traceutil/trace.go:171","msg":"trace[1157908086] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1115; }","duration":"200.805372ms","start":"2023-11-27T23:29:19.087308Z","end":"2023-11-27T23:29:19.288113Z","steps":["trace[1157908086] 'agreement among raft nodes before linearized reading'  (duration: 200.749717ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T23:29:58.145737Z","caller":"traceutil/trace.go:171","msg":"trace[1236031760] transaction","detail":"{read_only:false; response_revision:1215; number_of_response:1; }","duration":"229.008373ms","start":"2023-11-27T23:29:57.91669Z","end":"2023-11-27T23:29:58.145698Z","steps":["trace[1236031760] 'process raft request'  (duration: 228.577168ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T23:31:00.235812Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.402139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:9140"}
	{"level":"info","ts":"2023-11-27T23:31:00.2361Z","caller":"traceutil/trace.go:171","msg":"trace[1582713015] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1560; }","duration":"127.70877ms","start":"2023-11-27T23:31:00.108377Z","end":"2023-11-27T23:31:00.236086Z","steps":["trace[1582713015] 'range keys from in-memory index tree'  (duration: 127.27726ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T23:31:00.236114Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.544725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2023-11-27T23:31:00.236286Z","caller":"traceutil/trace.go:171","msg":"trace[996049476] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1560; }","duration":"131.720452ms","start":"2023-11-27T23:31:00.104556Z","end":"2023-11-27T23:31:00.236276Z","steps":["trace[996049476] 'range keys from in-memory index tree'  (duration: 131.476477ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-27T23:31:19.15965Z","caller":"traceutil/trace.go:171","msg":"trace[101993931] transaction","detail":"{read_only:false; response_revision:1738; number_of_response:1; }","duration":"498.291548ms","start":"2023-11-27T23:31:18.661331Z","end":"2023-11-27T23:31:19.159623Z","steps":["trace[101993931] 'process raft request'  (duration: 498.188297ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-27T23:31:19.160528Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-27T23:31:18.661319Z","time spent":"499.006329ms","remote":"127.0.0.1:55124","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1737 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	* 
	* ==> gcp-auth [4bc84b60dca88d5cdb8d172b3d96c6ff949c4222597ea586f59b4e5cf141549e] <==
	* 2023/11/27 23:30:26 GCP Auth Webhook started!
	2023/11/27 23:30:32 Ready to marshal response ...
	2023/11/27 23:30:32 Ready to write response ...
	2023/11/27 23:30:35 Ready to marshal response ...
	2023/11/27 23:30:35 Ready to write response ...
	2023/11/27 23:30:37 Ready to marshal response ...
	2023/11/27 23:30:37 Ready to write response ...
	2023/11/27 23:30:51 Ready to marshal response ...
	2023/11/27 23:30:51 Ready to write response ...
	2023/11/27 23:30:51 Ready to marshal response ...
	2023/11/27 23:30:51 Ready to write response ...
	2023/11/27 23:30:54 Ready to marshal response ...
	2023/11/27 23:30:54 Ready to write response ...
	2023/11/27 23:30:54 Ready to marshal response ...
	2023/11/27 23:30:54 Ready to write response ...
	2023/11/27 23:30:54 Ready to marshal response ...
	2023/11/27 23:30:54 Ready to write response ...
	2023/11/27 23:30:54 Ready to marshal response ...
	2023/11/27 23:30:54 Ready to write response ...
	2023/11/27 23:31:08 Ready to marshal response ...
	2023/11/27 23:31:08 Ready to write response ...
	2023/11/27 23:31:14 Ready to marshal response ...
	2023/11/27 23:31:14 Ready to write response ...
	2023/11/27 23:33:05 Ready to marshal response ...
	2023/11/27 23:33:05 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:33:16 up 6 min,  0 users,  load average: 0.89, 1.65, 0.98
	Linux addons-052905 5.10.57 #1 SMP Mon Nov 27 21:58:27 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c5308d87bc4c887c45161941503108aec9e04ce94bd5eba74838cd4675beafff] <==
	* I1127 23:30:54.573963       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.85.208"}
	I1127 23:31:13.732109       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:31:13.732277       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:31:13.748820       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:31:13.748964       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:31:13.769599       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:31:13.769698       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:31:13.784742       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:31:13.784815       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:31:13.797547       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:31:13.798016       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:31:13.811769       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:31:13.811864       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:31:13.840144       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:31:13.840213       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:31:13.840647       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:31:13.840712       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1127 23:31:14.799134       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1127 23:31:14.840991       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1127 23:31:14.865089       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1127 23:31:19.161712       1 trace.go:236] Trace[1875801109]: "Update" accept:application/json, */*,audit-id:2e5a14d3-8b2b-4ee3-b15a-49a0aeac543a,client:192.168.39.221,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (27-Nov-2023 23:31:18.660) (total time: 501ms):
	Trace[1875801109]: ["GuaranteedUpdate etcd3" audit-id:2e5a14d3-8b2b-4ee3-b15a-49a0aeac543a,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 501ms (23:31:18.660)
	Trace[1875801109]:  ---"Txn call completed" 500ms (23:31:19.161)]
	Trace[1875801109]: [501.586451ms] [501.586451ms] END
	I1127 23:33:06.202342       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.155.31"}
	
	* 
	* ==> kube-controller-manager [21b6bb8d9c3f9cd62152d2119d0ca668d7a6522825b7eb4b1dd3bab2ad47f40e] <==
	* W1127 23:31:56.793591       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:31:56.793674       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 23:32:19.744381       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:32:19.744499       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 23:32:23.736799       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:32:23.736971       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 23:32:37.655714       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:32:37.655767       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 23:32:43.848553       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:32:43.848648       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 23:32:57.790703       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:32:57.790781       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1127 23:33:05.967746       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1127 23:33:06.020412       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-zjqnw"
	I1127 23:33:06.041438       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="73.948081ms"
	I1127 23:33:06.073586       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="32.041518ms"
	I1127 23:33:06.109121       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="35.419893ms"
	I1127 23:33:06.109241       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="58.803µs"
	W1127 23:33:06.433708       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:33:06.433779       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1127 23:33:08.345022       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1127 23:33:08.349415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="3.474µs"
	I1127 23:33:08.355133       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1127 23:33:09.635227       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="8.176449ms"
	I1127 23:33:09.635504       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="63.624µs"
	
	* 
	* ==> kube-proxy [e1593a1f928c501ed44f63ddff917cd4619ab2f79f3be4d8db26e18d8df5b693] <==
	* I1127 23:28:05.952727       1 server_others.go:69] "Using iptables proxy"
	I1127 23:28:05.963068       1 node.go:141] Successfully retrieved node IP: 192.168.39.221
	I1127 23:28:06.361475       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1127 23:28:06.361521       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1127 23:28:06.402429       1 server_others.go:152] "Using iptables Proxier"
	I1127 23:28:06.402497       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1127 23:28:06.402646       1 server.go:846] "Version info" version="v1.28.4"
	I1127 23:28:06.402655       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1127 23:28:06.404008       1 config.go:188] "Starting service config controller"
	I1127 23:28:06.404030       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1127 23:28:06.404051       1 config.go:97] "Starting endpoint slice config controller"
	I1127 23:28:06.404055       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1127 23:28:06.404471       1 config.go:315] "Starting node config controller"
	I1127 23:28:06.404477       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1127 23:28:06.506420       1 shared_informer.go:318] Caches are synced for node config
	I1127 23:28:06.506467       1 shared_informer.go:318] Caches are synced for service config
	I1127 23:28:06.506490       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [4e9e9debd4ea86799581c5aa02a4db0431faebd2e47633654ac060e4c326557f] <==
	* W1127 23:27:34.667832       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1127 23:27:34.668389       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1127 23:27:34.668000       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1127 23:27:34.668477       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1127 23:27:34.668035       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1127 23:27:34.668489       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1127 23:27:34.668069       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1127 23:27:34.668499       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1127 23:27:34.668150       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1127 23:27:34.668627       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1127 23:27:34.677863       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1127 23:27:34.677985       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1127 23:27:35.525558       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1127 23:27:35.525582       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1127 23:27:35.686535       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1127 23:27:35.686601       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1127 23:27:35.820094       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1127 23:27:35.820146       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1127 23:27:35.872537       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1127 23:27:35.872590       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1127 23:27:35.877063       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1127 23:27:35.877112       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1127 23:27:35.893529       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1127 23:27:35.893576       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1127 23:27:38.450127       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-11-27 23:27:04 UTC, ends at Mon 2023-11-27 23:33:16 UTC. --
	Nov 27 23:33:06 addons-052905 kubelet[1256]: E1127 23:33:06.021508    1256 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="781af86d-bee0-4262-b8ed-23a3428f65ac" containerName="helm-test"
	Nov 27 23:33:06 addons-052905 kubelet[1256]: E1127 23:33:06.021533    1256 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="52ea092e-2863-40a3-9738-710b1a17e38a" containerName="tiller"
	Nov 27 23:33:06 addons-052905 kubelet[1256]: I1127 23:33:06.021587    1256 memory_manager.go:346] "RemoveStaleState removing state" podUID="52ea092e-2863-40a3-9738-710b1a17e38a" containerName="tiller"
	Nov 27 23:33:06 addons-052905 kubelet[1256]: I1127 23:33:06.021594    1256 memory_manager.go:346] "RemoveStaleState removing state" podUID="781af86d-bee0-4262-b8ed-23a3428f65ac" containerName="helm-test"
	Nov 27 23:33:06 addons-052905 kubelet[1256]: I1127 23:33:06.153656    1256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55wfm\" (UniqueName: \"kubernetes.io/projected/7a54f779-8ee5-4b20-b427-601aa54526e3-kube-api-access-55wfm\") pod \"hello-world-app-5d77478584-zjqnw\" (UID: \"7a54f779-8ee5-4b20-b427-601aa54526e3\") " pod="default/hello-world-app-5d77478584-zjqnw"
	Nov 27 23:33:06 addons-052905 kubelet[1256]: I1127 23:33:06.153704    1256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7a54f779-8ee5-4b20-b427-601aa54526e3-gcp-creds\") pod \"hello-world-app-5d77478584-zjqnw\" (UID: \"7a54f779-8ee5-4b20-b427-601aa54526e3\") " pod="default/hello-world-app-5d77478584-zjqnw"
	Nov 27 23:33:07 addons-052905 kubelet[1256]: I1127 23:33:07.362629    1256 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj6ng\" (UniqueName: \"kubernetes.io/projected/cc23eea2-e3e2-4597-95d0-bd6b89799a99-kube-api-access-qj6ng\") pod \"cc23eea2-e3e2-4597-95d0-bd6b89799a99\" (UID: \"cc23eea2-e3e2-4597-95d0-bd6b89799a99\") "
	Nov 27 23:33:07 addons-052905 kubelet[1256]: I1127 23:33:07.370043    1256 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc23eea2-e3e2-4597-95d0-bd6b89799a99-kube-api-access-qj6ng" (OuterVolumeSpecName: "kube-api-access-qj6ng") pod "cc23eea2-e3e2-4597-95d0-bd6b89799a99" (UID: "cc23eea2-e3e2-4597-95d0-bd6b89799a99"). InnerVolumeSpecName "kube-api-access-qj6ng". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 27 23:33:07 addons-052905 kubelet[1256]: I1127 23:33:07.463687    1256 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qj6ng\" (UniqueName: \"kubernetes.io/projected/cc23eea2-e3e2-4597-95d0-bd6b89799a99-kube-api-access-qj6ng\") on node \"addons-052905\" DevicePath \"\""
	Nov 27 23:33:07 addons-052905 kubelet[1256]: I1127 23:33:07.589964    1256 scope.go:117] "RemoveContainer" containerID="0745ba587b337a0254b55d0ebf03a4b22224bf32c4fec75ac75eab6752edbc94"
	Nov 27 23:33:07 addons-052905 kubelet[1256]: I1127 23:33:07.636565    1256 scope.go:117] "RemoveContainer" containerID="0745ba587b337a0254b55d0ebf03a4b22224bf32c4fec75ac75eab6752edbc94"
	Nov 27 23:33:07 addons-052905 kubelet[1256]: E1127 23:33:07.637294    1256 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0745ba587b337a0254b55d0ebf03a4b22224bf32c4fec75ac75eab6752edbc94\": container with ID starting with 0745ba587b337a0254b55d0ebf03a4b22224bf32c4fec75ac75eab6752edbc94 not found: ID does not exist" containerID="0745ba587b337a0254b55d0ebf03a4b22224bf32c4fec75ac75eab6752edbc94"
	Nov 27 23:33:07 addons-052905 kubelet[1256]: I1127 23:33:07.637346    1256 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0745ba587b337a0254b55d0ebf03a4b22224bf32c4fec75ac75eab6752edbc94"} err="failed to get container status \"0745ba587b337a0254b55d0ebf03a4b22224bf32c4fec75ac75eab6752edbc94\": rpc error: code = NotFound desc = could not find container \"0745ba587b337a0254b55d0ebf03a4b22224bf32c4fec75ac75eab6752edbc94\": container with ID starting with 0745ba587b337a0254b55d0ebf03a4b22224bf32c4fec75ac75eab6752edbc94 not found: ID does not exist"
	Nov 27 23:33:07 addons-052905 kubelet[1256]: I1127 23:33:07.980719    1256 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cc23eea2-e3e2-4597-95d0-bd6b89799a99" path="/var/lib/kubelet/pods/cc23eea2-e3e2-4597-95d0-bd6b89799a99/volumes"
	Nov 27 23:33:09 addons-052905 kubelet[1256]: I1127 23:33:09.626191    1256 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-zjqnw" podStartSLOduration=1.406427133 podCreationTimestamp="2023-11-27 23:33:06 +0000 UTC" firstStartedPulling="2023-11-27 23:33:07.206048548 +0000 UTC m=+329.436834468" lastFinishedPulling="2023-11-27 23:33:09.42573215 +0000 UTC m=+331.656518070" observedRunningTime="2023-11-27 23:33:09.62464529 +0000 UTC m=+331.855431229" watchObservedRunningTime="2023-11-27 23:33:09.626110735 +0000 UTC m=+331.856896677"
	Nov 27 23:33:09 addons-052905 kubelet[1256]: I1127 23:33:09.978267    1256 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c8d9bff2-9499-48bd-9340-1bd546c40ebe" path="/var/lib/kubelet/pods/c8d9bff2-9499-48bd-9340-1bd546c40ebe/volumes"
	Nov 27 23:33:09 addons-052905 kubelet[1256]: I1127 23:33:09.978645    1256 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e2e85764-09dc-4bde-a27b-483f31eac2ba" path="/var/lib/kubelet/pods/e2e85764-09dc-4bde-a27b-483f31eac2ba/volumes"
	Nov 27 23:33:11 addons-052905 kubelet[1256]: I1127 23:33:11.700432    1256 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b24zc\" (UniqueName: \"kubernetes.io/projected/430a43e3-da79-44af-b4ed-b5e73e6a75d0-kube-api-access-b24zc\") pod \"430a43e3-da79-44af-b4ed-b5e73e6a75d0\" (UID: \"430a43e3-da79-44af-b4ed-b5e73e6a75d0\") "
	Nov 27 23:33:11 addons-052905 kubelet[1256]: I1127 23:33:11.700499    1256 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/430a43e3-da79-44af-b4ed-b5e73e6a75d0-webhook-cert\") pod \"430a43e3-da79-44af-b4ed-b5e73e6a75d0\" (UID: \"430a43e3-da79-44af-b4ed-b5e73e6a75d0\") "
	Nov 27 23:33:11 addons-052905 kubelet[1256]: I1127 23:33:11.710372    1256 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/430a43e3-da79-44af-b4ed-b5e73e6a75d0-kube-api-access-b24zc" (OuterVolumeSpecName: "kube-api-access-b24zc") pod "430a43e3-da79-44af-b4ed-b5e73e6a75d0" (UID: "430a43e3-da79-44af-b4ed-b5e73e6a75d0"). InnerVolumeSpecName "kube-api-access-b24zc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 27 23:33:11 addons-052905 kubelet[1256]: I1127 23:33:11.711435    1256 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430a43e3-da79-44af-b4ed-b5e73e6a75d0-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "430a43e3-da79-44af-b4ed-b5e73e6a75d0" (UID: "430a43e3-da79-44af-b4ed-b5e73e6a75d0"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 27 23:33:11 addons-052905 kubelet[1256]: I1127 23:33:11.801405    1256 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/430a43e3-da79-44af-b4ed-b5e73e6a75d0-webhook-cert\") on node \"addons-052905\" DevicePath \"\""
	Nov 27 23:33:11 addons-052905 kubelet[1256]: I1127 23:33:11.801444    1256 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-b24zc\" (UniqueName: \"kubernetes.io/projected/430a43e3-da79-44af-b4ed-b5e73e6a75d0-kube-api-access-b24zc\") on node \"addons-052905\" DevicePath \"\""
	Nov 27 23:33:11 addons-052905 kubelet[1256]: I1127 23:33:11.977879    1256 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="430a43e3-da79-44af-b4ed-b5e73e6a75d0" path="/var/lib/kubelet/pods/430a43e3-da79-44af-b4ed-b5e73e6a75d0/volumes"
	Nov 27 23:33:12 addons-052905 kubelet[1256]: I1127 23:33:12.636135    1256 scope.go:117] "RemoveContainer" containerID="dace463a52287012ad0209de35145cc613dd815c5b38adc50dd076b5d1d82c15"
	
	* 
	* ==> storage-provisioner [52e5a002149b52f175c6393b2474fb219754e06556875f72896e46b8a3802eb8] <==
	* I1127 23:28:15.710290       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1127 23:28:15.740703       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1127 23:28:15.741358       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1127 23:28:15.759273       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1127 23:28:15.759316       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ef9e73c3-d294-4535-a84a-66d767128e15", APIVersion:"v1", ResourceVersion:"875", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-052905_cc760595-84d5-4af3-91c2-0a1872058403 became leader
	I1127 23:28:15.759467       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-052905_cc760595-84d5-4af3-91c2-0a1872058403!
	I1127 23:28:15.862464       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-052905_cc760595-84d5-4af3-91c2-0a1872058403!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-052905 -n addons-052905
helpers_test.go:261: (dbg) Run:  kubectl --context addons-052905 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (163.61s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (155.4s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-052905
addons_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-052905: exit status 82 (2m1.602144715s)

                                                
                                                
-- stdout --
	* Stopping node "addons-052905"  ...
	* Stopping node "addons-052905"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:173: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-052905" : exit status 82
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-052905
addons_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-052905: exit status 11 (21.513200004s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.221:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:177: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-052905" : exit status 11
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-052905
addons_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-052905: exit status 11 (6.14344105s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.221:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:181: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-052905" : exit status 11
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-052905
addons_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-052905: exit status 11 (6.142851541s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.221:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:186: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-052905" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.40s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (182.15s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-142525 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-142525 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.894170259s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-142525 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-142525 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d9b9e0c6-e0a3-493d-865c-46132cac0178] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d9b9e0c6-e0a3-493d-865c-46132cac0178] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 13.013955576s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-142525 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1127 23:48:50.988079   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1127 23:48:50.993313   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1127 23:48:51.003576   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1127 23:48:51.023819   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1127 23:48:51.064103   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1127 23:48:51.144449   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1127 23:48:51.304845   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1127 23:48:51.625424   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1127 23:48:52.266337   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1127 23:48:53.546494   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1127 23:48:56.107506   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1127 23:49:01.228099   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1127 23:49:11.469247   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1127 23:49:31.950118   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-142525 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.851815363s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-142525 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-142525 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.57
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-142525 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-142525 addons disable ingress-dns --alsologtostderr -v=1: (8.924574133s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-142525 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-142525 addons disable ingress --alsologtostderr -v=1: (7.563084794s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-142525 -n ingress-addon-legacy-142525
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-142525 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-142525 logs -n 25: (1.17987035s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-004462 ssh findmnt                                          | functional-004462           | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-004462                                                   | functional-004462           | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3200424757/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-004462                                                   | functional-004462           | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3200424757/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| update-context | functional-004462                                                      | functional-004462           | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC | 27 Nov 23 23:44 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-004462                                                      | functional-004462           | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC | 27 Nov 23 23:44 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-004462                                                      | functional-004462           | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC | 27 Nov 23 23:44 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-004462                                                      | functional-004462           | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC | 27 Nov 23 23:44 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-004462 ssh findmnt                                          | functional-004462           | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC | 27 Nov 23 23:44 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-004462 ssh findmnt                                          | functional-004462           | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC | 27 Nov 23 23:44 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| image          | functional-004462                                                      | functional-004462           | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC | 27 Nov 23 23:44 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-004462 ssh findmnt                                          | functional-004462           | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC | 27 Nov 23 23:44 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| ssh            | functional-004462 ssh pgrep                                            | functional-004462           | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| mount          | -p functional-004462                                                   | functional-004462           | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| image          | functional-004462 image build -t                                       | functional-004462           | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC | 27 Nov 23 23:44 UTC |
	|                | localhost/my-image:functional-004462                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-004462                                                      | functional-004462           | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC | 27 Nov 23 23:44 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-004462                                                      | functional-004462           | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC | 27 Nov 23 23:44 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-004462 image ls                                             | functional-004462           | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC | 27 Nov 23 23:44 UTC |
	| delete         | -p functional-004462                                                   | functional-004462           | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC | 27 Nov 23 23:44 UTC |
	| start          | -p ingress-addon-legacy-142525                                         | ingress-addon-legacy-142525 | jenkins | v1.32.0 | 27 Nov 23 23:44 UTC | 27 Nov 23 23:46 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                     |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-142525                                            | ingress-addon-legacy-142525 | jenkins | v1.32.0 | 27 Nov 23 23:46 UTC | 27 Nov 23 23:46 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-142525                                            | ingress-addon-legacy-142525 | jenkins | v1.32.0 | 27 Nov 23 23:46 UTC | 27 Nov 23 23:46 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-142525                                            | ingress-addon-legacy-142525 | jenkins | v1.32.0 | 27 Nov 23 23:47 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-142525 ip                                         | ingress-addon-legacy-142525 | jenkins | v1.32.0 | 27 Nov 23 23:49 UTC | 27 Nov 23 23:49 UTC |
	| addons         | ingress-addon-legacy-142525                                            | ingress-addon-legacy-142525 | jenkins | v1.32.0 | 27 Nov 23 23:49 UTC | 27 Nov 23 23:49 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-142525                                            | ingress-addon-legacy-142525 | jenkins | v1.32.0 | 27 Nov 23 23:49 UTC | 27 Nov 23 23:49 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:44:47
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:44:47.128836   21050 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:44:47.129009   21050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:44:47.129022   21050 out.go:309] Setting ErrFile to fd 2...
	I1127 23:44:47.129030   21050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:44:47.129241   21050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1127 23:44:47.129828   21050 out.go:303] Setting JSON to false
	I1127 23:44:47.130649   21050 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1634,"bootTime":1701127053,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 23:44:47.130738   21050 start.go:138] virtualization: kvm guest
	I1127 23:44:47.133303   21050 out.go:177] * [ingress-addon-legacy-142525] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 23:44:47.135015   21050 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:44:47.135039   21050 notify.go:220] Checking for updates...
	I1127 23:44:47.138351   21050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:44:47.140082   21050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1127 23:44:47.141593   21050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1127 23:44:47.143243   21050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 23:44:47.145007   21050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:44:47.146702   21050 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:44:47.180828   21050 out.go:177] * Using the kvm2 driver based on user configuration
	I1127 23:44:47.182529   21050 start.go:298] selected driver: kvm2
	I1127 23:44:47.182545   21050 start.go:902] validating driver "kvm2" against <nil>
	I1127 23:44:47.182554   21050 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:44:47.183213   21050 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:44:47.183273   21050 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17206-4749/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1127 23:44:47.197099   21050 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1127 23:44:47.197141   21050 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 23:44:47.197330   21050 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1127 23:44:47.197360   21050 cni.go:84] Creating CNI manager for ""
	I1127 23:44:47.197372   21050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1127 23:44:47.197385   21050 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1127 23:44:47.197393   21050 start_flags.go:323] config:
	{Name:ingress-addon-legacy-142525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-142525 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:44:47.197513   21050 iso.go:125] acquiring lock: {Name:mkcbf4fbddcb89ef7fa17df683cb708781ecb7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:44:47.199376   21050 out.go:177] * Starting control plane node ingress-addon-legacy-142525 in cluster ingress-addon-legacy-142525
	I1127 23:44:47.200707   21050 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1127 23:44:47.702122   21050 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1127 23:44:47.702168   21050 cache.go:56] Caching tarball of preloaded images
	I1127 23:44:47.702305   21050 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1127 23:44:47.704246   21050 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1127 23:44:47.705662   21050 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:44:47.822661   21050 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1127 23:45:02.550946   21050 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:45:02.551775   21050 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:45:03.534223   21050 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1127 23:45:03.534560   21050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/config.json ...
	I1127 23:45:03.534587   21050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/config.json: {Name:mkf37ba782e91a3f02fa946999053e5f78971791 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:45:03.534745   21050 start.go:365] acquiring machines lock for ingress-addon-legacy-142525: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1127 23:45:03.534777   21050 start.go:369] acquired machines lock for "ingress-addon-legacy-142525" in 16.467µs
	I1127 23:45:03.534828   21050 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-142525 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-142525 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 23:45:03.534900   21050 start.go:125] createHost starting for "" (driver="kvm2")
	I1127 23:45:03.537701   21050 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1127 23:45:03.537838   21050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:45:03.537887   21050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:45:03.551750   21050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42257
	I1127 23:45:03.552223   21050 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:45:03.552778   21050 main.go:141] libmachine: Using API Version  1
	I1127 23:45:03.552803   21050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:45:03.553124   21050 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:45:03.553294   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetMachineName
	I1127 23:45:03.553424   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .DriverName
	I1127 23:45:03.553564   21050 start.go:159] libmachine.API.Create for "ingress-addon-legacy-142525" (driver="kvm2")
	I1127 23:45:03.553593   21050 client.go:168] LocalClient.Create starting
	I1127 23:45:03.553632   21050 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem
	I1127 23:45:03.553667   21050 main.go:141] libmachine: Decoding PEM data...
	I1127 23:45:03.553685   21050 main.go:141] libmachine: Parsing certificate...
	I1127 23:45:03.553733   21050 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem
	I1127 23:45:03.553753   21050 main.go:141] libmachine: Decoding PEM data...
	I1127 23:45:03.553764   21050 main.go:141] libmachine: Parsing certificate...
	I1127 23:45:03.553781   21050 main.go:141] libmachine: Running pre-create checks...
	I1127 23:45:03.553790   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .PreCreateCheck
	I1127 23:45:03.554095   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetConfigRaw
	I1127 23:45:03.554474   21050 main.go:141] libmachine: Creating machine...
	I1127 23:45:03.554486   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .Create
	I1127 23:45:03.554660   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Creating KVM machine...
	I1127 23:45:03.555683   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | found existing default KVM network
	I1127 23:45:03.556304   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | I1127 23:45:03.556156   21118 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I1127 23:45:03.561426   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | trying to create private KVM network mk-ingress-addon-legacy-142525 192.168.39.0/24...
	I1127 23:45:03.627842   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | private KVM network mk-ingress-addon-legacy-142525 192.168.39.0/24 created
	I1127 23:45:03.627919   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | I1127 23:45:03.627796   21118 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17206-4749/.minikube
	I1127 23:45:03.627939   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Setting up store path in /home/jenkins/minikube-integration/17206-4749/.minikube/machines/ingress-addon-legacy-142525 ...
	I1127 23:45:03.627960   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Building disk image from file:///home/jenkins/minikube-integration/17206-4749/.minikube/cache/iso/amd64/minikube-v1.32.1-1701107474-17206-amd64.iso
	I1127 23:45:03.627979   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Downloading /home/jenkins/minikube-integration/17206-4749/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17206-4749/.minikube/cache/iso/amd64/minikube-v1.32.1-1701107474-17206-amd64.iso...
	I1127 23:45:03.828088   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | I1127 23:45:03.827966   21118 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/ingress-addon-legacy-142525/id_rsa...
	I1127 23:45:03.940976   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | I1127 23:45:03.940824   21118 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/ingress-addon-legacy-142525/ingress-addon-legacy-142525.rawdisk...
	I1127 23:45:03.941004   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | Writing magic tar header
	I1127 23:45:03.941016   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | Writing SSH key tar header
	I1127 23:45:03.941025   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | I1127 23:45:03.940931   21118 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17206-4749/.minikube/machines/ingress-addon-legacy-142525 ...
	I1127 23:45:03.941037   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/ingress-addon-legacy-142525
	I1127 23:45:03.941045   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube/machines
	I1127 23:45:03.941060   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube
	I1127 23:45:03.941071   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749
	I1127 23:45:03.941085   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1127 23:45:03.941102   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | Checking permissions on dir: /home/jenkins
	I1127 23:45:03.941118   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube/machines/ingress-addon-legacy-142525 (perms=drwx------)
	I1127 23:45:03.941141   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube/machines (perms=drwxr-xr-x)
	I1127 23:45:03.941148   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | Checking permissions on dir: /home
	I1127 23:45:03.941157   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube (perms=drwxr-xr-x)
	I1127 23:45:03.941169   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749 (perms=drwxrwxr-x)
	I1127 23:45:03.941182   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | Skipping /home - not owner
	I1127 23:45:03.941198   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1127 23:45:03.941217   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1127 23:45:03.941227   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Creating domain...
	I1127 23:45:03.942369   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) define libvirt domain using xml: 
	I1127 23:45:03.942402   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) <domain type='kvm'>
	I1127 23:45:03.942417   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)   <name>ingress-addon-legacy-142525</name>
	I1127 23:45:03.942437   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)   <memory unit='MiB'>4096</memory>
	I1127 23:45:03.942458   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)   <vcpu>2</vcpu>
	I1127 23:45:03.942468   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)   <features>
	I1127 23:45:03.942474   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     <acpi/>
	I1127 23:45:03.942483   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     <apic/>
	I1127 23:45:03.942493   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     <pae/>
	I1127 23:45:03.942507   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     
	I1127 23:45:03.942522   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)   </features>
	I1127 23:45:03.942541   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)   <cpu mode='host-passthrough'>
	I1127 23:45:03.942550   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)   
	I1127 23:45:03.942557   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)   </cpu>
	I1127 23:45:03.942565   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)   <os>
	I1127 23:45:03.942579   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     <type>hvm</type>
	I1127 23:45:03.942594   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     <boot dev='cdrom'/>
	I1127 23:45:03.942627   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     <boot dev='hd'/>
	I1127 23:45:03.942655   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     <bootmenu enable='no'/>
	I1127 23:45:03.942671   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)   </os>
	I1127 23:45:03.942683   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)   <devices>
	I1127 23:45:03.942699   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     <disk type='file' device='cdrom'>
	I1127 23:45:03.942716   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)       <source file='/home/jenkins/minikube-integration/17206-4749/.minikube/machines/ingress-addon-legacy-142525/boot2docker.iso'/>
	I1127 23:45:03.942730   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)       <target dev='hdc' bus='scsi'/>
	I1127 23:45:03.942748   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)       <readonly/>
	I1127 23:45:03.942760   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     </disk>
	I1127 23:45:03.942773   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     <disk type='file' device='disk'>
	I1127 23:45:03.942787   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1127 23:45:03.942809   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)       <source file='/home/jenkins/minikube-integration/17206-4749/.minikube/machines/ingress-addon-legacy-142525/ingress-addon-legacy-142525.rawdisk'/>
	I1127 23:45:03.942825   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)       <target dev='hda' bus='virtio'/>
	I1127 23:45:03.942837   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     </disk>
	I1127 23:45:03.942851   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     <interface type='network'>
	I1127 23:45:03.942865   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)       <source network='mk-ingress-addon-legacy-142525'/>
	I1127 23:45:03.942883   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)       <model type='virtio'/>
	I1127 23:45:03.942904   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     </interface>
	I1127 23:45:03.942919   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     <interface type='network'>
	I1127 23:45:03.942932   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)       <source network='default'/>
	I1127 23:45:03.942950   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)       <model type='virtio'/>
	I1127 23:45:03.942970   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     </interface>
	I1127 23:45:03.943006   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     <serial type='pty'>
	I1127 23:45:03.943039   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)       <target port='0'/>
	I1127 23:45:03.943056   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     </serial>
	I1127 23:45:03.943070   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     <console type='pty'>
	I1127 23:45:03.943086   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)       <target type='serial' port='0'/>
	I1127 23:45:03.943099   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     </console>
	I1127 23:45:03.943121   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     <rng model='virtio'>
	I1127 23:45:03.943142   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)       <backend model='random'>/dev/random</backend>
	I1127 23:45:03.943155   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     </rng>
	I1127 23:45:03.943165   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     
	I1127 23:45:03.943176   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)     
	I1127 23:45:03.943188   21050 main.go:141] libmachine: (ingress-addon-legacy-142525)   </devices>
	I1127 23:45:03.943202   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) </domain>
	I1127 23:45:03.943213   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) 
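	The block above is the complete libvirt domain XML the KVM driver generates: 4096 MiB of memory, 2 vCPUs, host-passthrough CPU, boot from the boot2docker ISO (cdrom) before the rawdisk (hd), and two virtio NICs on the mk-ingress-addon-legacy-142525 and default networks. Purely as an illustration of what "define libvirt domain using xml" amounts to, the same definition could be driven by hand with virsh; the XML file path below is a placeholder, not something the test wrote:

	  # Illustrative only: register and boot a guest from a domain XML like the one logged above
	  virsh define /tmp/ingress-addon-legacy-142525.xml      # store the domain definition in libvirt
	  virsh net-list --all                                   # confirm 'default' and the mk-* network exist
	  virsh start ingress-addon-legacy-142525                # boot: cdrom first, then hd, per the <boot> order
	  virsh dumpxml ingress-addon-legacy-142525 | head       # inspect the XML libvirt actually persisted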
	I1127 23:45:03.947162   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:22:83:09 in network default
	I1127 23:45:03.947612   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Ensuring networks are active...
	I1127 23:45:03.947633   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:03.948151   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Ensuring network default is active
	I1127 23:45:03.948413   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Ensuring network mk-ingress-addon-legacy-142525 is active
	I1127 23:45:03.948887   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Getting domain xml...
	I1127 23:45:03.949461   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Creating domain...
	I1127 23:45:05.145719   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Waiting to get IP...
	I1127 23:45:05.146445   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:05.146811   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | unable to find current IP address of domain ingress-addon-legacy-142525 in network mk-ingress-addon-legacy-142525
	I1127 23:45:05.146842   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | I1127 23:45:05.146777   21118 retry.go:31] will retry after 244.734674ms: waiting for machine to come up
	I1127 23:45:05.393176   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:05.393595   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | unable to find current IP address of domain ingress-addon-legacy-142525 in network mk-ingress-addon-legacy-142525
	I1127 23:45:05.393625   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | I1127 23:45:05.393537   21118 retry.go:31] will retry after 387.79428ms: waiting for machine to come up
	I1127 23:45:05.782849   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:05.783203   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | unable to find current IP address of domain ingress-addon-legacy-142525 in network mk-ingress-addon-legacy-142525
	I1127 23:45:05.783253   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | I1127 23:45:05.783147   21118 retry.go:31] will retry after 354.674889ms: waiting for machine to come up
	I1127 23:45:06.139596   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:06.139975   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | unable to find current IP address of domain ingress-addon-legacy-142525 in network mk-ingress-addon-legacy-142525
	I1127 23:45:06.140010   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | I1127 23:45:06.139937   21118 retry.go:31] will retry after 584.807604ms: waiting for machine to come up
	I1127 23:45:06.726514   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:06.726960   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | unable to find current IP address of domain ingress-addon-legacy-142525 in network mk-ingress-addon-legacy-142525
	I1127 23:45:06.726992   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | I1127 23:45:06.726906   21118 retry.go:31] will retry after 567.762739ms: waiting for machine to come up
	I1127 23:45:07.296557   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:07.296987   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | unable to find current IP address of domain ingress-addon-legacy-142525 in network mk-ingress-addon-legacy-142525
	I1127 23:45:07.297019   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | I1127 23:45:07.296936   21118 retry.go:31] will retry after 738.769205ms: waiting for machine to come up
	I1127 23:45:08.036733   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:08.037110   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | unable to find current IP address of domain ingress-addon-legacy-142525 in network mk-ingress-addon-legacy-142525
	I1127 23:45:08.037135   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | I1127 23:45:08.037059   21118 retry.go:31] will retry after 1.09830085s: waiting for machine to come up
	I1127 23:45:09.136710   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:09.137125   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | unable to find current IP address of domain ingress-addon-legacy-142525 in network mk-ingress-addon-legacy-142525
	I1127 23:45:09.137157   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | I1127 23:45:09.137070   21118 retry.go:31] will retry after 1.003133351s: waiting for machine to come up
	I1127 23:45:10.141412   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:10.141754   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | unable to find current IP address of domain ingress-addon-legacy-142525 in network mk-ingress-addon-legacy-142525
	I1127 23:45:10.141787   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | I1127 23:45:10.141698   21118 retry.go:31] will retry after 1.660154843s: waiting for machine to come up
	I1127 23:45:11.804005   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:11.804390   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | unable to find current IP address of domain ingress-addon-legacy-142525 in network mk-ingress-addon-legacy-142525
	I1127 23:45:11.804423   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | I1127 23:45:11.804347   21118 retry.go:31] will retry after 1.740670037s: waiting for machine to come up
	I1127 23:45:13.547401   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:13.547847   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | unable to find current IP address of domain ingress-addon-legacy-142525 in network mk-ingress-addon-legacy-142525
	I1127 23:45:13.547871   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | I1127 23:45:13.547794   21118 retry.go:31] will retry after 2.628484685s: waiting for machine to come up
	I1127 23:45:16.179819   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:16.180182   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | unable to find current IP address of domain ingress-addon-legacy-142525 in network mk-ingress-addon-legacy-142525
	I1127 23:45:16.180210   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | I1127 23:45:16.180128   21118 retry.go:31] will retry after 2.342797946s: waiting for machine to come up
	I1127 23:45:18.525539   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:18.525895   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | unable to find current IP address of domain ingress-addon-legacy-142525 in network mk-ingress-addon-legacy-142525
	I1127 23:45:18.525923   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | I1127 23:45:18.525853   21118 retry.go:31] will retry after 2.831424661s: waiting for machine to come up
	I1127 23:45:21.361115   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:21.361416   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | unable to find current IP address of domain ingress-addon-legacy-142525 in network mk-ingress-addon-legacy-142525
	I1127 23:45:21.361440   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | I1127 23:45:21.361363   21118 retry.go:31] will retry after 3.428785641s: waiting for machine to come up
	I1127 23:45:24.791919   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:24.792351   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Found IP for machine: 192.168.39.57
	I1127 23:45:24.792379   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Reserving static IP address...
	I1127 23:45:24.792398   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has current primary IP address 192.168.39.57 and MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:24.792668   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-142525", mac: "52:54:00:e2:29:0b", ip: "192.168.39.57"} in network mk-ingress-addon-legacy-142525
	I1127 23:45:24.862588   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | Getting to WaitForSSH function...
	I1127 23:45:24.862625   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Reserved static IP address: 192.168.39.57
	I1127 23:45:24.862640   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Waiting for SSH to be available...
	I1127 23:45:24.865309   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:24.865710   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:29:0b", ip: ""} in network mk-ingress-addon-legacy-142525: {Iface:virbr1 ExpiryTime:2023-11-28 00:45:19 +0000 UTC Type:0 Mac:52:54:00:e2:29:0b Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e2:29:0b}
	I1127 23:45:24.865744   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined IP address 192.168.39.57 and MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:24.865883   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | Using SSH client type: external
	I1127 23:45:24.865907   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/ingress-addon-legacy-142525/id_rsa (-rw-------)
	I1127 23:45:24.865941   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.57 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/ingress-addon-legacy-142525/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1127 23:45:24.865956   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | About to run SSH command:
	I1127 23:45:24.865985   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | exit 0
	I1127 23:45:24.961125   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | SSH cmd err, output: <nil>: 
	I1127 23:45:24.961400   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) KVM machine creation complete!
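	The "Waiting to get IP" loop above polls libvirt with increasing backoff until the guest's MAC 52:54:00:e2:29:0b obtains a DHCP lease (192.168.39.57), and SSH readiness is then probed by running exit 0 with the machine's private key. A rough manual equivalent of those two checks, reusing the key path and addresses from the log:

	  # Illustrative: watch for the guest's DHCP lease on the cluster network
	  virsh net-dhcp-leases mk-ingress-addon-legacy-142525
	  # Once the IP appears, confirm SSH answers with a no-op command
	  ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	      -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/ingress-addon-legacy-142525/id_rsa \
	      docker@192.168.39.57 'exit 0' && echo "SSH is available"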
	I1127 23:45:24.961671   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetConfigRaw
	I1127 23:45:24.962223   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .DriverName
	I1127 23:45:24.962427   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .DriverName
	I1127 23:45:24.962591   21050 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1127 23:45:24.962609   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetState
	I1127 23:45:24.963916   21050 main.go:141] libmachine: Detecting operating system of created instance...
	I1127 23:45:24.963952   21050 main.go:141] libmachine: Waiting for SSH to be available...
	I1127 23:45:24.963965   21050 main.go:141] libmachine: Getting to WaitForSSH function...
	I1127 23:45:24.963980   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHHostname
	I1127 23:45:24.966272   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:24.966614   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:29:0b", ip: ""} in network mk-ingress-addon-legacy-142525: {Iface:virbr1 ExpiryTime:2023-11-28 00:45:19 +0000 UTC Type:0 Mac:52:54:00:e2:29:0b Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-142525 Clientid:01:52:54:00:e2:29:0b}
	I1127 23:45:24.966647   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined IP address 192.168.39.57 and MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:24.966771   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHPort
	I1127 23:45:24.966945   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHKeyPath
	I1127 23:45:24.967107   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHKeyPath
	I1127 23:45:24.967236   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHUsername
	I1127 23:45:24.967384   21050 main.go:141] libmachine: Using SSH client type: native
	I1127 23:45:24.967719   21050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1127 23:45:24.967731   21050 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1127 23:45:25.088127   21050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 23:45:25.088150   21050 main.go:141] libmachine: Detecting the provisioner...
	I1127 23:45:25.088162   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHHostname
	I1127 23:45:25.090768   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:25.091080   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:29:0b", ip: ""} in network mk-ingress-addon-legacy-142525: {Iface:virbr1 ExpiryTime:2023-11-28 00:45:19 +0000 UTC Type:0 Mac:52:54:00:e2:29:0b Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-142525 Clientid:01:52:54:00:e2:29:0b}
	I1127 23:45:25.091101   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined IP address 192.168.39.57 and MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:25.091255   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHPort
	I1127 23:45:25.091461   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHKeyPath
	I1127 23:45:25.091616   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHKeyPath
	I1127 23:45:25.091780   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHUsername
	I1127 23:45:25.091922   21050 main.go:141] libmachine: Using SSH client type: native
	I1127 23:45:25.092225   21050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1127 23:45:25.092237   21050 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1127 23:45:25.213457   21050 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g8be4f20-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1127 23:45:25.213528   21050 main.go:141] libmachine: found compatible host: buildroot
	I1127 23:45:25.213539   21050 main.go:141] libmachine: Provisioning with buildroot...
	I1127 23:45:25.213553   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetMachineName
	I1127 23:45:25.213759   21050 buildroot.go:166] provisioning hostname "ingress-addon-legacy-142525"
	I1127 23:45:25.213786   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetMachineName
	I1127 23:45:25.214001   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHHostname
	I1127 23:45:25.216644   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:25.217012   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:29:0b", ip: ""} in network mk-ingress-addon-legacy-142525: {Iface:virbr1 ExpiryTime:2023-11-28 00:45:19 +0000 UTC Type:0 Mac:52:54:00:e2:29:0b Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-142525 Clientid:01:52:54:00:e2:29:0b}
	I1127 23:45:25.217045   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined IP address 192.168.39.57 and MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:25.217150   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHPort
	I1127 23:45:25.217333   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHKeyPath
	I1127 23:45:25.217497   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHKeyPath
	I1127 23:45:25.217640   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHUsername
	I1127 23:45:25.217807   21050 main.go:141] libmachine: Using SSH client type: native
	I1127 23:45:25.218245   21050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1127 23:45:25.218265   21050 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-142525 && echo "ingress-addon-legacy-142525" | sudo tee /etc/hostname
	I1127 23:45:25.353102   21050 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-142525
	
	I1127 23:45:25.353129   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHHostname
	I1127 23:45:25.355916   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:25.356225   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:29:0b", ip: ""} in network mk-ingress-addon-legacy-142525: {Iface:virbr1 ExpiryTime:2023-11-28 00:45:19 +0000 UTC Type:0 Mac:52:54:00:e2:29:0b Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-142525 Clientid:01:52:54:00:e2:29:0b}
	I1127 23:45:25.356258   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined IP address 192.168.39.57 and MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:25.356403   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHPort
	I1127 23:45:25.356594   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHKeyPath
	I1127 23:45:25.356791   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHKeyPath
	I1127 23:45:25.356932   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHUsername
	I1127 23:45:25.357107   21050 main.go:141] libmachine: Using SSH client type: native
	I1127 23:45:25.357412   21050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1127 23:45:25.357433   21050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-142525' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-142525/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-142525' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 23:45:25.489338   21050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 23:45:25.489368   21050 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1127 23:45:25.489400   21050 buildroot.go:174] setting up certificates
	I1127 23:45:25.489415   21050 provision.go:83] configureAuth start
	I1127 23:45:25.489426   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetMachineName
	I1127 23:45:25.489721   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetIP
	I1127 23:45:25.492511   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:25.492889   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:29:0b", ip: ""} in network mk-ingress-addon-legacy-142525: {Iface:virbr1 ExpiryTime:2023-11-28 00:45:19 +0000 UTC Type:0 Mac:52:54:00:e2:29:0b Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-142525 Clientid:01:52:54:00:e2:29:0b}
	I1127 23:45:25.492921   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined IP address 192.168.39.57 and MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:25.493117   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHHostname
	I1127 23:45:25.495299   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:25.495621   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:29:0b", ip: ""} in network mk-ingress-addon-legacy-142525: {Iface:virbr1 ExpiryTime:2023-11-28 00:45:19 +0000 UTC Type:0 Mac:52:54:00:e2:29:0b Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-142525 Clientid:01:52:54:00:e2:29:0b}
	I1127 23:45:25.495655   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined IP address 192.168.39.57 and MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:25.495777   21050 provision.go:138] copyHostCerts
	I1127 23:45:25.495819   21050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1127 23:45:25.495859   21050 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1127 23:45:25.495870   21050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1127 23:45:25.495951   21050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1127 23:45:25.496026   21050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1127 23:45:25.496045   21050 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1127 23:45:25.496051   21050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1127 23:45:25.496074   21050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1127 23:45:25.496114   21050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1127 23:45:25.496129   21050 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1127 23:45:25.496135   21050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1127 23:45:25.496154   21050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1127 23:45:25.496198   21050 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-142525 san=[192.168.39.57 192.168.39.57 localhost 127.0.0.1 minikube ingress-addon-legacy-142525]
	I1127 23:45:25.611244   21050 provision.go:172] copyRemoteCerts
	I1127 23:45:25.611311   21050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 23:45:25.611333   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHHostname
	I1127 23:45:25.614088   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:25.614450   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:29:0b", ip: ""} in network mk-ingress-addon-legacy-142525: {Iface:virbr1 ExpiryTime:2023-11-28 00:45:19 +0000 UTC Type:0 Mac:52:54:00:e2:29:0b Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-142525 Clientid:01:52:54:00:e2:29:0b}
	I1127 23:45:25.614481   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined IP address 192.168.39.57 and MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:25.614653   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHPort
	I1127 23:45:25.614821   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHKeyPath
	I1127 23:45:25.614997   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHUsername
	I1127 23:45:25.615184   21050 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/ingress-addon-legacy-142525/id_rsa Username:docker}
	I1127 23:45:25.706095   21050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1127 23:45:25.706181   21050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1127 23:45:25.729644   21050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1127 23:45:25.729729   21050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1127 23:45:25.752855   21050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1127 23:45:25.752932   21050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1127 23:45:25.775944   21050 provision.go:86] duration metric: configureAuth took 286.518977ms
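	configureAuth generates a server certificate for the guest signed by the local minikube CA, with SANs for the guest IP, localhost, 127.0.0.1, minikube and the machine name, and copyRemoteCerts then places server.pem, server-key.pem and ca.pem under /etc/docker on the guest. minikube does this in Go; the openssl commands below are only a hedged sketch of producing a certificate with the same subject and SAN set, not what the test actually ran:

	  # Sketch (assumed openssl equivalent): server cert signed by the minikube CA with the logged SANs
	  openssl genrsa -out server-key.pem 2048
	  openssl req -new -key server-key.pem -subj "/O=jenkins.ingress-addon-legacy-142525" -out server.csr
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out server.pem \
	    -extfile <(printf "subjectAltName=IP:192.168.39.57,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:ingress-addon-legacy-142525")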
	I1127 23:45:25.775985   21050 buildroot.go:189] setting minikube options for container-runtime
	I1127 23:45:25.776171   21050 config.go:182] Loaded profile config "ingress-addon-legacy-142525": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1127 23:45:25.776255   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHHostname
	I1127 23:45:25.778567   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:25.778866   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:29:0b", ip: ""} in network mk-ingress-addon-legacy-142525: {Iface:virbr1 ExpiryTime:2023-11-28 00:45:19 +0000 UTC Type:0 Mac:52:54:00:e2:29:0b Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-142525 Clientid:01:52:54:00:e2:29:0b}
	I1127 23:45:25.778899   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined IP address 192.168.39.57 and MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:25.779027   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHPort
	I1127 23:45:25.779198   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHKeyPath
	I1127 23:45:25.779339   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHKeyPath
	I1127 23:45:25.779454   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHUsername
	I1127 23:45:25.779626   21050 main.go:141] libmachine: Using SSH client type: native
	I1127 23:45:25.779961   21050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1127 23:45:25.779989   21050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1127 23:45:26.107485   21050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
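	The literal %!s(MISSING) in the command above is a Go printf artifact in the log (a format verb logged without its argument); the step itself writes a one-line drop-in that passes --insecure-registry 10.96.0.0/12 to CRI-O and restarts the service, as the echoed output confirms. Reproducing or verifying it by hand would look like this (mirrors the logged commands):

	  # Illustrative: write the CRI-O drop-in and restart the service, then verify
	  printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
	  sudo systemctl restart crio
	  cat /etc/sysconfig/crio.minikube     # drop-in contents
	  systemctl is-active crio             # service came back up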
	
	I1127 23:45:26.107517   21050 main.go:141] libmachine: Checking connection to Docker...
	I1127 23:45:26.107530   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetURL
	I1127 23:45:26.108917   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | Using libvirt version 6000000
	I1127 23:45:26.111186   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:26.111547   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:29:0b", ip: ""} in network mk-ingress-addon-legacy-142525: {Iface:virbr1 ExpiryTime:2023-11-28 00:45:19 +0000 UTC Type:0 Mac:52:54:00:e2:29:0b Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-142525 Clientid:01:52:54:00:e2:29:0b}
	I1127 23:45:26.111576   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined IP address 192.168.39.57 and MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:26.111701   21050 main.go:141] libmachine: Docker is up and running!
	I1127 23:45:26.111719   21050 main.go:141] libmachine: Reticulating splines...
	I1127 23:45:26.111728   21050 client.go:171] LocalClient.Create took 22.558125267s
	I1127 23:45:26.111758   21050 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-142525" took 22.558195994s
	I1127 23:45:26.111782   21050 start.go:300] post-start starting for "ingress-addon-legacy-142525" (driver="kvm2")
	I1127 23:45:26.111799   21050 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 23:45:26.111827   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .DriverName
	I1127 23:45:26.112049   21050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 23:45:26.112072   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHHostname
	I1127 23:45:26.114169   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:26.114446   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:29:0b", ip: ""} in network mk-ingress-addon-legacy-142525: {Iface:virbr1 ExpiryTime:2023-11-28 00:45:19 +0000 UTC Type:0 Mac:52:54:00:e2:29:0b Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-142525 Clientid:01:52:54:00:e2:29:0b}
	I1127 23:45:26.114473   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined IP address 192.168.39.57 and MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:26.114582   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHPort
	I1127 23:45:26.114748   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHKeyPath
	I1127 23:45:26.114897   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHUsername
	I1127 23:45:26.115057   21050 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/ingress-addon-legacy-142525/id_rsa Username:docker}
	I1127 23:45:26.201786   21050 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 23:45:26.206556   21050 info.go:137] Remote host: Buildroot 2021.02.12
	I1127 23:45:26.206578   21050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1127 23:45:26.206640   21050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1127 23:45:26.206741   21050 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1127 23:45:26.206756   21050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> /etc/ssl/certs/119302.pem
	I1127 23:45:26.206882   21050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1127 23:45:26.214890   21050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1127 23:45:26.237233   21050 start.go:303] post-start completed in 125.433082ms
	I1127 23:45:26.237281   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetConfigRaw
	I1127 23:45:26.237804   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetIP
	I1127 23:45:26.240266   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:26.240588   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:29:0b", ip: ""} in network mk-ingress-addon-legacy-142525: {Iface:virbr1 ExpiryTime:2023-11-28 00:45:19 +0000 UTC Type:0 Mac:52:54:00:e2:29:0b Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-142525 Clientid:01:52:54:00:e2:29:0b}
	I1127 23:45:26.240634   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined IP address 192.168.39.57 and MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:26.240889   21050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/config.json ...
	I1127 23:45:26.241086   21050 start.go:128] duration metric: createHost completed in 22.706177042s
	I1127 23:45:26.241113   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHHostname
	I1127 23:45:26.243320   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:26.243684   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:29:0b", ip: ""} in network mk-ingress-addon-legacy-142525: {Iface:virbr1 ExpiryTime:2023-11-28 00:45:19 +0000 UTC Type:0 Mac:52:54:00:e2:29:0b Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-142525 Clientid:01:52:54:00:e2:29:0b}
	I1127 23:45:26.243712   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined IP address 192.168.39.57 and MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:26.243862   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHPort
	I1127 23:45:26.244065   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHKeyPath
	I1127 23:45:26.244233   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHKeyPath
	I1127 23:45:26.244403   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHUsername
	I1127 23:45:26.244602   21050 main.go:141] libmachine: Using SSH client type: native
	I1127 23:45:26.244969   21050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1127 23:45:26.244983   21050 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1127 23:45:26.365577   21050 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701128726.331830925
	
	I1127 23:45:26.365599   21050 fix.go:206] guest clock: 1701128726.331830925
	I1127 23:45:26.365608   21050 fix.go:219] Guest: 2023-11-27 23:45:26.331830925 +0000 UTC Remote: 2023-11-27 23:45:26.241099392 +0000 UTC m=+39.159842466 (delta=90.731533ms)
	I1127 23:45:26.365631   21050 fix.go:190] guest clock delta is within tolerance: 90.731533ms
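	The date +%!s(MISSING).%!N(MISSING) above is the same printf logging artifact; the command presumably run on the guest is date +%s.%N, whose seconds.nanoseconds output (1701128726.331830925) is compared with the host clock, and the ~90ms delta is accepted as within tolerance. A minimal sketch of that skew check (key path from the log; awk used only for the subtraction):

	  # Illustrative: compare guest and host clocks over SSH
	  GUEST=$(ssh -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/ingress-addon-legacy-142525/id_rsa \
	          docker@192.168.39.57 'date +%s.%N')
	  HOST=$(date +%s.%N)
	  awk -v h="$HOST" -v g="$GUEST" 'BEGIN { printf "clock delta: %.3f s\n", h - g }'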
	I1127 23:45:26.365637   21050 start.go:83] releasing machines lock for "ingress-addon-legacy-142525", held for 22.830850772s
	I1127 23:45:26.365663   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .DriverName
	I1127 23:45:26.365976   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetIP
	I1127 23:45:26.368691   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:26.368983   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:29:0b", ip: ""} in network mk-ingress-addon-legacy-142525: {Iface:virbr1 ExpiryTime:2023-11-28 00:45:19 +0000 UTC Type:0 Mac:52:54:00:e2:29:0b Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-142525 Clientid:01:52:54:00:e2:29:0b}
	I1127 23:45:26.369015   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined IP address 192.168.39.57 and MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:26.369149   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .DriverName
	I1127 23:45:26.369620   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .DriverName
	I1127 23:45:26.369799   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .DriverName
	I1127 23:45:26.369909   21050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 23:45:26.369950   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHHostname
	I1127 23:45:26.370047   21050 ssh_runner.go:195] Run: cat /version.json
	I1127 23:45:26.370072   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHHostname
	I1127 23:45:26.372390   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:26.372662   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:29:0b", ip: ""} in network mk-ingress-addon-legacy-142525: {Iface:virbr1 ExpiryTime:2023-11-28 00:45:19 +0000 UTC Type:0 Mac:52:54:00:e2:29:0b Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-142525 Clientid:01:52:54:00:e2:29:0b}
	I1127 23:45:26.372697   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined IP address 192.168.39.57 and MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:26.372723   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:26.372842   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHPort
	I1127 23:45:26.373045   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHKeyPath
	I1127 23:45:26.373194   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:29:0b", ip: ""} in network mk-ingress-addon-legacy-142525: {Iface:virbr1 ExpiryTime:2023-11-28 00:45:19 +0000 UTC Type:0 Mac:52:54:00:e2:29:0b Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-142525 Clientid:01:52:54:00:e2:29:0b}
	I1127 23:45:26.373200   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHUsername
	I1127 23:45:26.373229   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined IP address 192.168.39.57 and MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:26.373364   21050 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/ingress-addon-legacy-142525/id_rsa Username:docker}
	I1127 23:45:26.373430   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHPort
	I1127 23:45:26.373602   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHKeyPath
	I1127 23:45:26.373722   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHUsername
	I1127 23:45:26.373886   21050 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/ingress-addon-legacy-142525/id_rsa Username:docker}
	I1127 23:45:26.458362   21050 ssh_runner.go:195] Run: systemctl --version
	I1127 23:45:26.483886   21050 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1127 23:45:26.640189   21050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1127 23:45:26.646034   21050 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1127 23:45:26.646093   21050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:45:26.659982   21050 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1127 23:45:26.660016   21050 start.go:472] detecting cgroup driver to use...
	I1127 23:45:26.660070   21050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 23:45:26.672387   21050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 23:45:26.684123   21050 docker.go:203] disabling cri-docker service (if available) ...
	I1127 23:45:26.684175   21050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 23:45:26.695718   21050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 23:45:26.707375   21050 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1127 23:45:26.808760   21050 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 23:45:26.931634   21050 docker.go:219] disabling docker service ...
	I1127 23:45:26.931704   21050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 23:45:26.944873   21050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 23:45:26.956311   21050 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 23:45:27.070591   21050 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 23:45:27.182197   21050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
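	The sequence above stops, disables and masks cri-docker and docker so that CRI-O is the only container runtime left active, and the final is-active check confirms docker is down. Condensed into one snippet (mirrors the commands the test ran over SSH):

	  # Illustrative: take cri-docker and docker out of the picture, leaving CRI-O as the runtime
	  sudo systemctl stop -f cri-docker.socket cri-docker.service
	  sudo systemctl disable cri-docker.socket
	  sudo systemctl mask cri-docker.service
	  sudo systemctl stop -f docker.socket docker.service
	  sudo systemctl disable docker.socket
	  sudo systemctl mask docker.service
	  sudo systemctl is-active --quiet docker || echo "docker is inactive, as expected"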
	I1127 23:45:27.194013   21050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 23:45:27.211071   21050 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1127 23:45:27.211156   21050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:45:27.220308   21050 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1127 23:45:27.220370   21050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:45:27.229428   21050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:45:27.238767   21050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:45:27.247701   21050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 23:45:27.256944   21050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 23:45:27.264958   21050 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1127 23:45:27.265030   21050 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1127 23:45:27.277160   21050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1127 23:45:27.285864   21050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 23:45:27.395839   21050 ssh_runner.go:195] Run: sudo systemctl restart crio
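	The sed edits above rewrite the drop-in /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses registry.k8s.io/pause:3.2 as the pause image and cgroupfs as the cgroup manager with conmon in the "pod" cgroup, after which systemd is reloaded and CRI-O restarted. Consolidated (values and paths taken directly from the logged commands):

	  # Illustrative: the CRI-O drop-in configuration performed above, in one place
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	  sudo systemctl daemon-reload && sudo systemctl restart crio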
	I1127 23:45:27.566305   21050 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1127 23:45:27.566385   21050 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1127 23:45:27.571272   21050 start.go:540] Will wait 60s for crictl version
	I1127 23:45:27.571323   21050 ssh_runner.go:195] Run: which crictl
	I1127 23:45:27.575006   21050 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1127 23:45:27.613338   21050 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1127 23:45:27.613454   21050 ssh_runner.go:195] Run: crio --version
	I1127 23:45:27.658231   21050 ssh_runner.go:195] Run: crio --version
	I1127 23:45:27.705825   21050 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I1127 23:45:27.707293   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetIP
	I1127 23:45:27.709831   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:27.710187   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:29:0b", ip: ""} in network mk-ingress-addon-legacy-142525: {Iface:virbr1 ExpiryTime:2023-11-28 00:45:19 +0000 UTC Type:0 Mac:52:54:00:e2:29:0b Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-142525 Clientid:01:52:54:00:e2:29:0b}
	I1127 23:45:27.710224   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined IP address 192.168.39.57 and MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:45:27.710408   21050 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1127 23:45:27.714626   21050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:45:27.727240   21050 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1127 23:45:27.727300   21050 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 23:45:27.760837   21050 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1127 23:45:27.760911   21050 ssh_runner.go:195] Run: which lz4
	I1127 23:45:27.765014   21050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1127 23:45:27.765089   21050 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1127 23:45:27.769208   21050 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1127 23:45:27.769233   21050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1127 23:45:29.602278   21050 crio.go:444] Took 1.837208 seconds to copy over tarball
	I1127 23:45:29.602348   21050 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1127 23:45:32.540010   21050 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.937629764s)
	I1127 23:45:32.540048   21050 crio.go:451] Took 2.937744 seconds to extract the tarball
	I1127 23:45:32.540062   21050 ssh_runner.go:146] rm: /preloaded.tar.lz4
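The preload path above is: stat to see whether /preloaded.tar.lz4 already exists on the guest, copy the ~495 MB cached tarball over when it does not, unpack it into /var with lz4, and delete it. A node-local sketch of the extract-and-clean-up half (the copy step depends on minikube's SSH plumbing and is omitted):

    # Unpack the preloaded image tarball into /var, then remove it.
    if [ ! -s /preloaded.tar.lz4 ]; then
      echo "preload tarball missing; copy it to /preloaded.tar.lz4 first" >&2
      exit 1
    fi
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4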
	I1127 23:45:32.582789   21050 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 23:45:32.633179   21050 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1127 23:45:32.633206   21050 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1127 23:45:32.633245   21050 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:45:32.633280   21050 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1127 23:45:32.633305   21050 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1127 23:45:32.633323   21050 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1127 23:45:32.633294   21050 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1127 23:45:32.633429   21050 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 23:45:32.633465   21050 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1127 23:45:32.633549   21050 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1127 23:45:32.634575   21050 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1127 23:45:32.634594   21050 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1127 23:45:32.634586   21050 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1127 23:45:32.634601   21050 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1127 23:45:32.634611   21050 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 23:45:32.634601   21050 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1127 23:45:32.634621   21050 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:45:32.634575   21050 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1127 23:45:32.788212   21050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1127 23:45:32.791106   21050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1127 23:45:32.796386   21050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1127 23:45:32.803944   21050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 23:45:32.806762   21050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1127 23:45:32.813734   21050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1127 23:45:32.821046   21050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1127 23:45:32.880968   21050 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1127 23:45:32.881008   21050 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1127 23:45:32.881064   21050 ssh_runner.go:195] Run: which crictl
	I1127 23:45:32.891793   21050 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1127 23:45:32.891836   21050 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1127 23:45:32.891891   21050 ssh_runner.go:195] Run: which crictl
	I1127 23:45:32.950926   21050 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1127 23:45:32.950971   21050 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1127 23:45:32.951023   21050 ssh_runner.go:195] Run: which crictl
	I1127 23:45:32.955053   21050 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1127 23:45:32.955099   21050 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 23:45:32.955149   21050 ssh_runner.go:195] Run: which crictl
	I1127 23:45:32.975594   21050 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1127 23:45:32.975630   21050 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1127 23:45:32.975683   21050 ssh_runner.go:195] Run: which crictl
	I1127 23:45:32.976903   21050 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1127 23:45:32.976961   21050 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1127 23:45:32.977012   21050 ssh_runner.go:195] Run: which crictl
	I1127 23:45:33.000353   21050 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1127 23:45:33.000412   21050 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1127 23:45:33.000432   21050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1127 23:45:33.000442   21050 ssh_runner.go:195] Run: which crictl
	I1127 23:45:33.000498   21050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1127 23:45:33.000525   21050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1127 23:45:33.000565   21050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 23:45:33.000632   21050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1127 23:45:33.000665   21050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1127 23:45:33.137688   21050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1127 23:45:33.137781   21050 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1127 23:45:33.137783   21050 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1127 23:45:33.137848   21050 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1127 23:45:33.137853   21050 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1127 23:45:33.137935   21050 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1127 23:45:33.137974   21050 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1127 23:45:33.173937   21050 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1127 23:45:33.605179   21050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:45:33.742540   21050 cache_images.go:92] LoadImages completed in 1.109306026s
	W1127 23:45:33.742657   21050 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7: no such file or directory
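Each "needs transfer" decision above comes from inspecting the image in CRI-O's storage and comparing the reported ID with the expected one; a mismatching or missing image is removed with crictl and would then be re-loaded from the local cache (here the cache files themselves are absent, hence the warning). A hedged sketch of that per-image check, using the apiserver image and the expected ID quoted in the log:

    # Decide whether one image must be transferred from the local cache.
    img="registry.k8s.io/kube-apiserver:v1.18.20"
    want="7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1"
    have=$(sudo podman image inspect --format '{{.Id}}' "$img" 2>/dev/null || true)
    if [ "$have" != "$want" ]; then
      sudo /usr/bin/crictl rmi "$img" 2>/dev/null || true
      echo "$img needs transfer from the image cache"
    fi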
	I1127 23:45:33.742743   21050 ssh_runner.go:195] Run: crio config
	I1127 23:45:33.796787   21050 cni.go:84] Creating CNI manager for ""
	I1127 23:45:33.796811   21050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1127 23:45:33.796828   21050 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1127 23:45:33.796854   21050 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.57 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-142525 NodeName:ingress-addon-legacy-142525 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.57"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.57 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1127 23:45:33.797016   21050 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.57
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-142525"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.57
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.57"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1127 23:45:33.797219   21050 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-142525 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-142525 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1127 23:45:33.797305   21050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1127 23:45:33.806160   21050 binaries.go:44] Found k8s binaries, skipping transfer
	I1127 23:45:33.806259   21050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1127 23:45:33.814702   21050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I1127 23:45:33.830783   21050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1127 23:45:33.846986   21050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
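The three scp calls above write the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit and the kubeadm config onto the node from in-memory assets. systemd only picks up new unit definitions after a daemon-reload; that step is not shown in this excerpt, so the following is a hypothetical manual equivalent (the enable matches the kubeadm warning printed later about kubelet.service not being enabled):

    # Make systemd re-read the freshly written unit and drop-in.
    sudo systemctl daemon-reload
    sudo systemctl enable kubelet.service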
	I1127 23:45:33.863641   21050 ssh_runner.go:195] Run: grep 192.168.39.57	control-plane.minikube.internal$ /etc/hosts
	I1127 23:45:33.867520   21050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.57	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:45:33.880761   21050 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525 for IP: 192.168.39.57
	I1127 23:45:33.880796   21050 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:45:33.880946   21050 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1127 23:45:33.881011   21050 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1127 23:45:33.881076   21050 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.key
	I1127 23:45:33.881093   21050 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt with IP's: []
	I1127 23:45:34.003409   21050 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt ...
	I1127 23:45:34.003447   21050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: {Name:mk0105b591213ed6cc1af8468b7dcbfb9e38eff5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:45:34.003656   21050 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.key ...
	I1127 23:45:34.003674   21050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.key: {Name:mkf581fd1870b78d8858f116d7d260935a6085a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:45:34.003789   21050 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/apiserver.key.e9c877e8
	I1127 23:45:34.003875   21050 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/apiserver.crt.e9c877e8 with IP's: [192.168.39.57 10.96.0.1 127.0.0.1 10.0.0.1]
	I1127 23:45:34.075260   21050 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/apiserver.crt.e9c877e8 ...
	I1127 23:45:34.075295   21050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/apiserver.crt.e9c877e8: {Name:mk529e487e2d2279e1544213c0499836627b6154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:45:34.075477   21050 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/apiserver.key.e9c877e8 ...
	I1127 23:45:34.075496   21050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/apiserver.key.e9c877e8: {Name:mk6f164b6a438a19a3f687313e47438a3a46d62d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:45:34.075600   21050 certs.go:337] copying /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/apiserver.crt.e9c877e8 -> /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/apiserver.crt
	I1127 23:45:34.075696   21050 certs.go:341] copying /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/apiserver.key.e9c877e8 -> /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/apiserver.key
	I1127 23:45:34.075780   21050 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/proxy-client.key
	I1127 23:45:34.075812   21050 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/proxy-client.crt with IP's: []
	I1127 23:45:34.267639   21050 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/proxy-client.crt ...
	I1127 23:45:34.267673   21050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/proxy-client.crt: {Name:mk549bf2b28ae31a16926622ec9de31effee7930 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:45:34.267828   21050 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/proxy-client.key ...
	I1127 23:45:34.267841   21050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/proxy-client.key: {Name:mk4aa06d012898b0828ced7a6aa0ccecc95bce06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:45:34.267909   21050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1127 23:45:34.267931   21050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1127 23:45:34.267944   21050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1127 23:45:34.267956   21050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1127 23:45:34.267972   21050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1127 23:45:34.267988   21050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1127 23:45:34.268000   21050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1127 23:45:34.268011   21050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1127 23:45:34.268059   21050 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1127 23:45:34.268093   21050 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1127 23:45:34.268103   21050 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1127 23:45:34.268126   21050 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1127 23:45:34.268149   21050 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1127 23:45:34.268176   21050 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1127 23:45:34.268214   21050 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1127 23:45:34.268237   21050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:45:34.268249   21050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem -> /usr/share/ca-certificates/11930.pem
	I1127 23:45:34.268260   21050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> /usr/share/ca-certificates/119302.pem
	I1127 23:45:34.268855   21050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1127 23:45:34.298255   21050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1127 23:45:34.321785   21050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1127 23:45:34.344922   21050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1127 23:45:34.366495   21050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1127 23:45:34.389631   21050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1127 23:45:34.412645   21050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1127 23:45:34.435937   21050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1127 23:45:34.459061   21050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1127 23:45:34.481907   21050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1127 23:45:34.505935   21050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1127 23:45:34.529261   21050 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
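At this point the profile's serving and client certificates, the two CAs and the kubeconfig have all been copied into /var/lib/minikube. A quick, optional sanity check (not part of the logged flow) is to confirm that the copied apiserver certificate chains to the minikube CA:

    # Hypothetical verification step; paths are the ones used above.
    sudo openssl verify -CAfile /var/lib/minikube/certs/ca.crt \
      /var/lib/minikube/certs/apiserver.crt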
	I1127 23:45:34.545033   21050 ssh_runner.go:195] Run: openssl version
	I1127 23:45:34.550441   21050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1127 23:45:34.559503   21050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1127 23:45:34.563958   21050 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1127 23:45:34.564023   21050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1127 23:45:34.569287   21050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1127 23:45:34.578521   21050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1127 23:45:34.588339   21050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1127 23:45:34.593279   21050 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1127 23:45:34.593353   21050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1127 23:45:34.598807   21050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1127 23:45:34.608308   21050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1127 23:45:34.618132   21050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:45:34.622877   21050 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:45:34.622944   21050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:45:34.628649   21050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
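The openssl/ln pairs above are how each extra CA is made visible in OpenSSL's default trust directory: the link name (51391683.0, 3ec20f2e.0, b5213941.0) is the certificate's subject hash with a .0 suffix. A sketch of deriving and installing one such link:

    # Link a CA into /etc/ssl/certs under its subject-hash name.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"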
	I1127 23:45:34.638794   21050 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1127 23:45:34.643173   21050 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 23:45:34.643233   21050 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-142525 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.18.20 ClusterName:ingress-addon-legacy-142525 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mou
ntMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:45:34.643347   21050 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1127 23:45:34.643408   21050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1127 23:45:34.681292   21050 cri.go:89] found id: ""
	I1127 23:45:34.681404   21050 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1127 23:45:34.689914   21050 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1127 23:45:34.698145   21050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1127 23:45:34.706586   21050 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 23:45:34.706648   21050 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1127 23:45:34.758702   21050 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1127 23:45:34.758815   21050 kubeadm.go:322] [preflight] Running pre-flight checks
	I1127 23:45:34.892528   21050 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 23:45:34.892663   21050 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 23:45:34.892799   21050 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1127 23:45:35.120259   21050 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 23:45:35.121579   21050 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 23:45:35.121684   21050 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1127 23:45:35.238002   21050 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 23:45:35.240206   21050 out.go:204]   - Generating certificates and keys ...
	I1127 23:45:35.240309   21050 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1127 23:45:35.240419   21050 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1127 23:45:35.588519   21050 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1127 23:45:35.754854   21050 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1127 23:45:35.835479   21050 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1127 23:45:35.949512   21050 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1127 23:45:36.467513   21050 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1127 23:45:36.467987   21050 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-142525 localhost] and IPs [192.168.39.57 127.0.0.1 ::1]
	I1127 23:45:36.738114   21050 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1127 23:45:36.738759   21050 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-142525 localhost] and IPs [192.168.39.57 127.0.0.1 ::1]
	I1127 23:45:36.857371   21050 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1127 23:45:37.107656   21050 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1127 23:45:37.512858   21050 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1127 23:45:37.513267   21050 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 23:45:37.663844   21050 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 23:45:37.853512   21050 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 23:45:38.144347   21050 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 23:45:38.202604   21050 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 23:45:38.203520   21050 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 23:45:38.215681   21050 out.go:204]   - Booting up control plane ...
	I1127 23:45:38.215835   21050 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 23:45:38.215952   21050 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 23:45:38.216058   21050 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 23:45:38.216155   21050 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 23:45:38.218043   21050 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 23:45:46.215421   21050 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002490 seconds
	I1127 23:45:46.215579   21050 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 23:45:46.230576   21050 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 23:45:46.753254   21050 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1127 23:45:46.753419   21050 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-142525 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1127 23:45:47.264864   21050 kubeadm.go:322] [bootstrap-token] Using token: b413mq.fzg3r170vsus262r
	I1127 23:45:47.266324   21050 out.go:204]   - Configuring RBAC rules ...
	I1127 23:45:47.266420   21050 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 23:45:47.272692   21050 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1127 23:45:47.287571   21050 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 23:45:47.291644   21050 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 23:45:47.294634   21050 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 23:45:47.296844   21050 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 23:45:47.316987   21050 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1127 23:45:47.568133   21050 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1127 23:45:47.714077   21050 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1127 23:45:47.715264   21050 kubeadm.go:322] 
	I1127 23:45:47.715357   21050 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1127 23:45:47.715395   21050 kubeadm.go:322] 
	I1127 23:45:47.715500   21050 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1127 23:45:47.715511   21050 kubeadm.go:322] 
	I1127 23:45:47.715542   21050 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1127 23:45:47.715616   21050 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 23:45:47.715683   21050 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 23:45:47.715694   21050 kubeadm.go:322] 
	I1127 23:45:47.715758   21050 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1127 23:45:47.715864   21050 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 23:45:47.715962   21050 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 23:45:47.715972   21050 kubeadm.go:322] 
	I1127 23:45:47.716101   21050 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1127 23:45:47.716203   21050 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1127 23:45:47.716218   21050 kubeadm.go:322] 
	I1127 23:45:47.716335   21050 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token b413mq.fzg3r170vsus262r \
	I1127 23:45:47.716487   21050 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1127 23:45:47.716524   21050 kubeadm.go:322]     --control-plane 
	I1127 23:45:47.716535   21050 kubeadm.go:322] 
	I1127 23:45:47.716647   21050 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1127 23:45:47.716660   21050 kubeadm.go:322] 
	I1127 23:45:47.716767   21050 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token b413mq.fzg3r170vsus262r \
	I1127 23:45:47.716911   21050 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1127 23:45:47.717118   21050 kubeadm.go:322] W1127 23:45:34.737574     959 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1127 23:45:47.717257   21050 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1127 23:45:47.717427   21050 kubeadm.go:322] W1127 23:45:38.195981     959 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1127 23:45:47.717608   21050 kubeadm.go:322] W1127 23:45:38.197477     959 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1127 23:45:47.717621   21050 cni.go:84] Creating CNI manager for ""
	I1127 23:45:47.717632   21050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1127 23:45:47.719304   21050 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1127 23:45:47.720708   21050 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1127 23:45:47.732218   21050 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
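The 457-byte conflist written to /etc/cni/net.d/1-k8s.conflist is not printed in the log. As a purely illustrative sketch, a bridge CNI configuration for the 10.244.0.0/16 pod CIDR chosen above typically contains roughly the following; every field value here is an assumption, not the file minikube actually wrote:

    # Hypothetical bridge CNI conflist for pod CIDR 10.244.0.0/16.
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }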
	I1127 23:45:47.748210   21050 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1127 23:45:47.748301   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:47.748325   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=ingress-addon-legacy-142525 minikube.k8s.io/updated_at=2023_11_27T23_45_47_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:47.763880   21050 ops.go:34] apiserver oom_adj: -16
	I1127 23:45:48.284152   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:48.420002   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:49.037222   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:49.536307   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:50.036704   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:50.537200   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:51.036810   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:51.537138   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:52.036451   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:52.537217   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:53.036553   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:53.537135   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:54.037249   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:54.537087   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:55.036443   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:55.536659   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:56.036335   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:56.536806   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:57.036906   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:57.536848   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:58.037329   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:58.536359   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:59.036654   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:45:59.536718   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:46:00.037325   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:46:00.536909   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:46:01.037091   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:46:01.536854   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:46:02.037104   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:46:02.536842   21050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:46:03.230230   21050 kubeadm.go:1081] duration metric: took 15.481970329s to wait for elevateKubeSystemPrivileges.
	I1127 23:46:03.230273   21050 kubeadm.go:406] StartCluster complete in 28.587042988s
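The burst of identical "kubectl get sa default" runs above is a simple readiness poll: kubeadm has already finished, but the default service account only appears once the controller-manager has created it, so minikube retries roughly twice a second until the command succeeds (about 15.5s here). An equivalent wait loop:

    # Wait until the "default" service account exists in the default namespace.
    KUBECTL=/var/lib/minikube/binaries/v1.18.20/kubectl
    until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done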
	I1127 23:46:03.230294   21050 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:46:03.230367   21050 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1127 23:46:03.231051   21050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:46:03.231273   21050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1127 23:46:03.231347   21050 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1127 23:46:03.231420   21050 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-142525"
	I1127 23:46:03.231439   21050 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-142525"
	I1127 23:46:03.231445   21050 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-142525"
	I1127 23:46:03.231463   21050 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-142525"
	I1127 23:46:03.231509   21050 host.go:66] Checking if "ingress-addon-legacy-142525" exists ...
	I1127 23:46:03.231524   21050 config.go:182] Loaded profile config "ingress-addon-legacy-142525": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1127 23:46:03.231949   21050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:46:03.231980   21050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:46:03.231948   21050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:46:03.232019   21050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:46:03.231993   21050 kapi.go:59] client config for ingress-addon-legacy-142525: &rest.Config{Host:"https://192.168.39.57:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(n
il), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:46:03.232696   21050 cert_rotation.go:137] Starting client certificate rotation controller
	I1127 23:46:03.246278   21050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35317
	I1127 23:46:03.246630   21050 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:46:03.247079   21050 main.go:141] libmachine: Using API Version  1
	I1127 23:46:03.247107   21050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:46:03.247434   21050 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:46:03.247893   21050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:46:03.247917   21050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:46:03.251500   21050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45007
	I1127 23:46:03.252005   21050 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:46:03.252519   21050 main.go:141] libmachine: Using API Version  1
	I1127 23:46:03.252546   21050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:46:03.252831   21050 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:46:03.252982   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetState
	I1127 23:46:03.255319   21050 kapi.go:59] client config for ingress-addon-legacy-142525: &rest.Config{Host:"https://192.168.39.57:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(n
il), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:46:03.255628   21050 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-142525"
	I1127 23:46:03.255668   21050 host.go:66] Checking if "ingress-addon-legacy-142525" exists ...
	I1127 23:46:03.256077   21050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:46:03.256122   21050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:46:03.263638   21050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45193
	I1127 23:46:03.264096   21050 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:46:03.264598   21050 main.go:141] libmachine: Using API Version  1
	I1127 23:46:03.264629   21050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:46:03.264969   21050 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:46:03.265152   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetState
	I1127 23:46:03.267092   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .DriverName
	I1127 23:46:03.269491   21050 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:46:03.271186   21050 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:46:03.271207   21050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1127 23:46:03.271229   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHHostname
	I1127 23:46:03.271257   21050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44783
	I1127 23:46:03.271740   21050 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:46:03.272276   21050 main.go:141] libmachine: Using API Version  1
	I1127 23:46:03.272297   21050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:46:03.272664   21050 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:46:03.273296   21050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:46:03.273335   21050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:46:03.275246   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:46:03.275772   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:29:0b", ip: ""} in network mk-ingress-addon-legacy-142525: {Iface:virbr1 ExpiryTime:2023-11-28 00:45:19 +0000 UTC Type:0 Mac:52:54:00:e2:29:0b Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-142525 Clientid:01:52:54:00:e2:29:0b}
	I1127 23:46:03.275800   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined IP address 192.168.39.57 and MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:46:03.276076   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHPort
	I1127 23:46:03.276271   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHKeyPath
	I1127 23:46:03.276440   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHUsername
	I1127 23:46:03.276574   21050 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/ingress-addon-legacy-142525/id_rsa Username:docker}
	I1127 23:46:03.288387   21050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41587
	I1127 23:46:03.288840   21050 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:46:03.289293   21050 main.go:141] libmachine: Using API Version  1
	I1127 23:46:03.289317   21050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:46:03.289636   21050 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:46:03.289838   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetState
	I1127 23:46:03.291364   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .DriverName
	I1127 23:46:03.291618   21050 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1127 23:46:03.291635   21050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1127 23:46:03.291653   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHHostname
	I1127 23:46:03.294370   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:46:03.294743   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:29:0b", ip: ""} in network mk-ingress-addon-legacy-142525: {Iface:virbr1 ExpiryTime:2023-11-28 00:45:19 +0000 UTC Type:0 Mac:52:54:00:e2:29:0b Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-142525 Clientid:01:52:54:00:e2:29:0b}
	I1127 23:46:03.294770   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | domain ingress-addon-legacy-142525 has defined IP address 192.168.39.57 and MAC address 52:54:00:e2:29:0b in network mk-ingress-addon-legacy-142525
	I1127 23:46:03.294906   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHPort
	I1127 23:46:03.295064   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHKeyPath
	I1127 23:46:03.295212   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .GetSSHUsername
	I1127 23:46:03.295358   21050 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/ingress-addon-legacy-142525/id_rsa Username:docker}
	I1127 23:46:03.365719   21050 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-142525" context rescaled to 1 replicas
	I1127 23:46:03.365754   21050 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 23:46:03.367312   21050 out.go:177] * Verifying Kubernetes components...
	I1127 23:46:03.368855   21050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:46:03.495536   21050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:46:03.502440   21050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1127 23:46:03.659315   21050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1127 23:46:03.659851   21050 kapi.go:59] client config for ingress-addon-legacy-142525: &rest.Config{Host:"https://192.168.39.57:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(n
il), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:46:03.660148   21050 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-142525" to be "Ready" ...
	I1127 23:46:03.740598   21050 node_ready.go:49] node "ingress-addon-legacy-142525" has status "Ready":"True"
	I1127 23:46:03.740648   21050 node_ready.go:38] duration metric: took 80.469448ms waiting for node "ingress-addon-legacy-142525" to be "Ready" ...
	I1127 23:46:03.740662   21050 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 23:46:04.072630   21050 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-28629" in "kube-system" namespace to be "Ready" ...
	I1127 23:46:04.480901   21050 main.go:141] libmachine: Making call to close driver server
	I1127 23:46:04.480943   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .Close
	I1127 23:46:04.480964   21050 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1127 23:46:04.480917   21050 main.go:141] libmachine: Making call to close driver server
	I1127 23:46:04.481025   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .Close
	I1127 23:46:04.481247   21050 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:46:04.481291   21050 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:46:04.481312   21050 main.go:141] libmachine: Making call to close driver server
	I1127 23:46:04.481320   21050 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:46:04.481361   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | Closing plugin on server side
	I1127 23:46:04.481367   21050 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:46:04.481387   21050 main.go:141] libmachine: Making call to close driver server
	I1127 23:46:04.481397   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .Close
	I1127 23:46:04.481332   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | Closing plugin on server side
	I1127 23:46:04.481581   21050 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:46:04.481645   21050 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:46:04.481330   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .Close
	I1127 23:46:04.481626   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) DBG | Closing plugin on server side
	I1127 23:46:04.482009   21050 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:46:04.482026   21050 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:46:04.493965   21050 main.go:141] libmachine: Making call to close driver server
	I1127 23:46:04.493982   21050 main.go:141] libmachine: (ingress-addon-legacy-142525) Calling .Close
	I1127 23:46:04.494233   21050 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:46:04.494250   21050 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:46:04.496066   21050 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1127 23:46:04.497422   21050 addons.go:502] enable addons completed in 1.266071373s: enabled=[storage-provisioner default-storageclass]
	I1127 23:46:04.616164   21050 pod_ready.go:97] error getting pod "coredns-66bff467f8-28629" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-28629" not found
	I1127 23:46:04.616201   21050 pod_ready.go:81] duration metric: took 543.535456ms waiting for pod "coredns-66bff467f8-28629" in "kube-system" namespace to be "Ready" ...
	E1127 23:46:04.616216   21050 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-28629" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-28629" not found
	I1127 23:46:04.616224   21050 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-vd4q5" in "kube-system" namespace to be "Ready" ...
	I1127 23:46:06.638835   21050 pod_ready.go:102] pod "coredns-66bff467f8-vd4q5" in "kube-system" namespace has status "Ready":"False"
	I1127 23:46:09.137401   21050 pod_ready.go:102] pod "coredns-66bff467f8-vd4q5" in "kube-system" namespace has status "Ready":"False"
	I1127 23:46:11.137782   21050 pod_ready.go:102] pod "coredns-66bff467f8-vd4q5" in "kube-system" namespace has status "Ready":"False"
	I1127 23:46:13.636518   21050 pod_ready.go:102] pod "coredns-66bff467f8-vd4q5" in "kube-system" namespace has status "Ready":"False"
	I1127 23:46:16.136944   21050 pod_ready.go:102] pod "coredns-66bff467f8-vd4q5" in "kube-system" namespace has status "Ready":"False"
	I1127 23:46:18.138539   21050 pod_ready.go:102] pod "coredns-66bff467f8-vd4q5" in "kube-system" namespace has status "Ready":"False"
	I1127 23:46:20.637570   21050 pod_ready.go:102] pod "coredns-66bff467f8-vd4q5" in "kube-system" namespace has status "Ready":"False"
	I1127 23:46:23.137900   21050 pod_ready.go:102] pod "coredns-66bff467f8-vd4q5" in "kube-system" namespace has status "Ready":"False"
	I1127 23:46:25.636447   21050 pod_ready.go:102] pod "coredns-66bff467f8-vd4q5" in "kube-system" namespace has status "Ready":"False"
	I1127 23:46:27.638156   21050 pod_ready.go:102] pod "coredns-66bff467f8-vd4q5" in "kube-system" namespace has status "Ready":"False"
	I1127 23:46:30.137755   21050 pod_ready.go:102] pod "coredns-66bff467f8-vd4q5" in "kube-system" namespace has status "Ready":"False"
	I1127 23:46:32.637711   21050 pod_ready.go:102] pod "coredns-66bff467f8-vd4q5" in "kube-system" namespace has status "Ready":"False"
	I1127 23:46:35.137838   21050 pod_ready.go:102] pod "coredns-66bff467f8-vd4q5" in "kube-system" namespace has status "Ready":"False"
	I1127 23:46:36.637054   21050 pod_ready.go:92] pod "coredns-66bff467f8-vd4q5" in "kube-system" namespace has status "Ready":"True"
	I1127 23:46:36.637105   21050 pod_ready.go:81] duration metric: took 32.020872387s waiting for pod "coredns-66bff467f8-vd4q5" in "kube-system" namespace to be "Ready" ...
	I1127 23:46:36.637118   21050 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-142525" in "kube-system" namespace to be "Ready" ...
	I1127 23:46:36.642542   21050 pod_ready.go:92] pod "etcd-ingress-addon-legacy-142525" in "kube-system" namespace has status "Ready":"True"
	I1127 23:46:36.642559   21050 pod_ready.go:81] duration metric: took 5.433965ms waiting for pod "etcd-ingress-addon-legacy-142525" in "kube-system" namespace to be "Ready" ...
	I1127 23:46:36.642571   21050 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-142525" in "kube-system" namespace to be "Ready" ...
	I1127 23:46:36.646982   21050 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-142525" in "kube-system" namespace has status "Ready":"True"
	I1127 23:46:36.646999   21050 pod_ready.go:81] duration metric: took 4.420052ms waiting for pod "kube-apiserver-ingress-addon-legacy-142525" in "kube-system" namespace to be "Ready" ...
	I1127 23:46:36.647009   21050 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-142525" in "kube-system" namespace to be "Ready" ...
	I1127 23:46:36.651358   21050 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-142525" in "kube-system" namespace has status "Ready":"True"
	I1127 23:46:36.651374   21050 pod_ready.go:81] duration metric: took 4.358378ms waiting for pod "kube-controller-manager-ingress-addon-legacy-142525" in "kube-system" namespace to be "Ready" ...
	I1127 23:46:36.651384   21050 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rhr7p" in "kube-system" namespace to be "Ready" ...
	I1127 23:46:36.655649   21050 pod_ready.go:92] pod "kube-proxy-rhr7p" in "kube-system" namespace has status "Ready":"True"
	I1127 23:46:36.655663   21050 pod_ready.go:81] duration metric: took 4.271864ms waiting for pod "kube-proxy-rhr7p" in "kube-system" namespace to be "Ready" ...
	I1127 23:46:36.655673   21050 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-142525" in "kube-system" namespace to be "Ready" ...
	I1127 23:46:36.831017   21050 request.go:629] Waited for 175.263229ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-142525
	I1127 23:46:37.031642   21050 request.go:629] Waited for 197.371255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ingress-addon-legacy-142525
	I1127 23:46:37.035286   21050 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-142525" in "kube-system" namespace has status "Ready":"True"
	I1127 23:46:37.035308   21050 pod_ready.go:81] duration metric: took 379.62806ms waiting for pod "kube-scheduler-ingress-addon-legacy-142525" in "kube-system" namespace to be "Ready" ...
	I1127 23:46:37.035315   21050 pod_ready.go:38] duration metric: took 33.294636127s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 23:46:37.035331   21050 api_server.go:52] waiting for apiserver process to appear ...
	I1127 23:46:37.035380   21050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 23:46:37.049612   21050 api_server.go:72] duration metric: took 33.683821366s to wait for apiserver process to appear ...
	I1127 23:46:37.049638   21050 api_server.go:88] waiting for apiserver healthz status ...
	I1127 23:46:37.049652   21050 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I1127 23:46:37.055256   21050 api_server.go:279] https://192.168.39.57:8443/healthz returned 200:
	ok
	I1127 23:46:37.056106   21050 api_server.go:141] control plane version: v1.18.20
	I1127 23:46:37.056130   21050 api_server.go:131] duration metric: took 6.485934ms to wait for apiserver health ...
	I1127 23:46:37.056140   21050 system_pods.go:43] waiting for kube-system pods to appear ...
	I1127 23:46:37.231652   21050 request.go:629] Waited for 175.454204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I1127 23:46:37.237161   21050 system_pods.go:59] 7 kube-system pods found
	I1127 23:46:37.237194   21050 system_pods.go:61] "coredns-66bff467f8-vd4q5" [2d07a1a9-173e-4270-b114-da5a7cde215c] Running
	I1127 23:46:37.237200   21050 system_pods.go:61] "etcd-ingress-addon-legacy-142525" [73a4bf92-c0fa-4103-ada7-4fd27cd82170] Running
	I1127 23:46:37.237204   21050 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-142525" [fc35bf2f-20eb-4eb0-95fb-9bb7485f4bd5] Running
	I1127 23:46:37.237208   21050 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-142525" [d4d3e556-7352-4a14-af79-ec8bc926f95a] Running
	I1127 23:46:37.237212   21050 system_pods.go:61] "kube-proxy-rhr7p" [9961fec5-e7fd-4a03-aa60-1143daf1ef01] Running
	I1127 23:46:37.237216   21050 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-142525" [e035372d-e83d-4f5c-8df9-e328524ba105] Running
	I1127 23:46:37.237220   21050 system_pods.go:61] "storage-provisioner" [3a595214-eb1e-4d62-9624-e92ddd58c303] Running
	I1127 23:46:37.237225   21050 system_pods.go:74] duration metric: took 181.080275ms to wait for pod list to return data ...
	I1127 23:46:37.237232   21050 default_sa.go:34] waiting for default service account to be created ...
	I1127 23:46:37.431675   21050 request.go:629] Waited for 194.374321ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/default/serviceaccounts
	I1127 23:46:37.434462   21050 default_sa.go:45] found service account: "default"
	I1127 23:46:37.434484   21050 default_sa.go:55] duration metric: took 197.246964ms for default service account to be created ...
	I1127 23:46:37.434491   21050 system_pods.go:116] waiting for k8s-apps to be running ...
	I1127 23:46:37.630845   21050 request.go:629] Waited for 196.302514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I1127 23:46:37.636399   21050 system_pods.go:86] 7 kube-system pods found
	I1127 23:46:37.636423   21050 system_pods.go:89] "coredns-66bff467f8-vd4q5" [2d07a1a9-173e-4270-b114-da5a7cde215c] Running
	I1127 23:46:37.636428   21050 system_pods.go:89] "etcd-ingress-addon-legacy-142525" [73a4bf92-c0fa-4103-ada7-4fd27cd82170] Running
	I1127 23:46:37.636432   21050 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-142525" [fc35bf2f-20eb-4eb0-95fb-9bb7485f4bd5] Running
	I1127 23:46:37.636436   21050 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-142525" [d4d3e556-7352-4a14-af79-ec8bc926f95a] Running
	I1127 23:46:37.636440   21050 system_pods.go:89] "kube-proxy-rhr7p" [9961fec5-e7fd-4a03-aa60-1143daf1ef01] Running
	I1127 23:46:37.636444   21050 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-142525" [e035372d-e83d-4f5c-8df9-e328524ba105] Running
	I1127 23:46:37.636447   21050 system_pods.go:89] "storage-provisioner" [3a595214-eb1e-4d62-9624-e92ddd58c303] Running
	I1127 23:46:37.636454   21050 system_pods.go:126] duration metric: took 201.958033ms to wait for k8s-apps to be running ...
	I1127 23:46:37.636464   21050 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 23:46:37.636515   21050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:46:37.650694   21050 system_svc.go:56] duration metric: took 14.224072ms WaitForService to wait for kubelet.
	I1127 23:46:37.650714   21050 kubeadm.go:581] duration metric: took 34.284930119s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1127 23:46:37.650729   21050 node_conditions.go:102] verifying NodePressure condition ...
	I1127 23:46:37.831170   21050 request.go:629] Waited for 180.361875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes
	I1127 23:46:37.834470   21050 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1127 23:46:37.834495   21050 node_conditions.go:123] node cpu capacity is 2
	I1127 23:46:37.834528   21050 node_conditions.go:105] duration metric: took 183.7956ms to run NodePressure ...
	I1127 23:46:37.834543   21050 start.go:228] waiting for startup goroutines ...
	I1127 23:46:37.834552   21050 start.go:233] waiting for cluster config update ...
	I1127 23:46:37.834560   21050 start.go:242] writing updated cluster config ...
	I1127 23:46:37.834819   21050 ssh_runner.go:195] Run: rm -f paused
	I1127 23:46:37.881970   21050 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1127 23:46:37.883899   21050 out.go:177] 
	W1127 23:46:37.885350   21050 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1127 23:46:37.886780   21050 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1127 23:46:37.888114   21050 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-142525" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-11-27 23:45:15 UTC, ends at Mon 2023-11-27 23:49:56 UTC. --
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.503463492Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c0231660-1df8-4564-a9e2-4e21207224f4 name=/runtime.v1.RuntimeService/Version
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.504910446Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8ffe53f1-3757-4bc2-a636-987e449c7931 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.505426951Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701128996505414784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202349,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=8ffe53f1-3757-4bc2-a636-987e449c7931 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.506356984Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=47e420a2-f93f-4416-8092-96ed23a149b8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.506407720Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=47e420a2-f93f-4416-8092-96ed23a149b8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.506769505Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18980423a0708c67e1494afabff9148724336deb04c3d15dd2f6bc04cbfd0f76,PodSandboxId:7759310d9b079979ff97c0105f43d4cef2cc82c7afcce19c6dd99be46df6afa5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701128982835742858,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-95999,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39b7b876-84ac-48c3-8fb4-cc346e5b255a,},Annotations:map[string]string{io.kubernetes.container.hash: a8fa7523,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e1782d9d175515bd4e42253a57ef8ec1ad638c134529bd72fd9ad1187e5069d,PodSandboxId:0abe4ba3a2b305220f08cf64ce28a36efcaa9e024f221b5e6eebf0d9fca9c5aa,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1701128839096056763,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d9b9e0c6-e0a3-493d-865c-46132cac0178,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 70977c77,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:154307453b6b0e3aaeadbe2b85ecce6ed2eb04c16bd99df39245c3301a555b53,PodSandboxId:08747a488ca2d624d87938ed89feb8899360c3df651ac6b0aaddd6b66760ce15,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1701128813951153284,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-nwhkb,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3,},Annotations:map[string]string{io.kubernetes.container.hash: f2192052,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8f889d816249986247027fde108ffb05448d9debd7793e55fd9e7460221d8e09,PodSandboxId:e4e5f3201616bb41a5483afc97f036a26e66586855eb8b846f6c8f97b3953bff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701128804409581760,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qrhkd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cbd4294e-de09-4e8a-9fcc-7b310a6ebe7b,},Annotations:map[string]string{io.kubernetes.container.hash: 41de8939,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1fbaaa46e8b80866628c005c8dfaea63dd71bd9966fc4c6d9b60031c5112c4a,PodSandboxId:0b60bd0aaa82b7968b21d1a506f10343107d05d7acc7eb91fb79e7fe4ebff52d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701128803243715308,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8sqw6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2cab50b6-80a5-40c2-962e-f8839e3b87a7,},Annotations:map[string]string{io.kubernetes.container.hash: b2d66619,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca907634657275c9e0449d5b12745ef0c452908fc3ded2940e29f5ef0c99eba,PodSandboxId:292e9dd51efa8c0ea0959f54774296e0353feabddc1a868b17e31366d7c4d821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701128765228814577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a595214-eb1e-4d62-9624-e92ddd58c303,},Annotations:map[string]string{io.kubernetes.container.hash: 8a9df6e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e85b90c1c322a0424741076aec98b8f9a855a5b57b7c73314070c22183abc1e,PodSandboxId:92a9ee45bf574ae47e638f21fce5eed8f695895759d3349c7fc1f8150a503f99,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1701128764786156743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rhr7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9961fec5-e7fd-4a03-aa60-1143daf1ef01,},Annotations:map[string]string{io.kubernetes.container.hash: d2d890fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4775850923fb529252cb950c175652bdb84ff3b9199b15cc362c4c6ce626137,PodSandboxId:631fb6d387370fc7d459983ffe80c3dde8520db9c7c80c4a0103deb5bf5982be,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701128764022503547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-vd4q5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d07a1a9-173e-4270-b114-da5a7cde215c,},Annotations:map[string]string{io.kubernetes.container.hash: 5d3a86cc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfd9ad62727b19147462f6b4d7536291af7d7874413581c1a78fe9611b44333,Pod
SandboxId:f7cdcf53aa610462d756c5976dd67599efde16a91d4f5cf8b5a18240ec895ead,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1701128741016128395,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-142525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f86d52eed4a2d47face47cef4b664a0,},Annotations:map[string]string{io.kubernetes.container.hash: 962fe94a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13f4b25bfa5857a487dd7d2adb2858ca9f0f32403f0042bdfdec6273c7974f7,PodSandboxId:bcd41af1921563128e8a25c90bddc72a7380
e2eacdf10e12a338050aef0ac8ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1701128740132192386,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-142525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:083bc641ee9babce93cb16db0d7898a1260475c02c7410f227c96cedce072899,PodSandboxId:960a8da454d0f403ee084c571de07f713443eee261
eecc1361176a1300c9fd22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1701128739656419580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-142525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:700801a1e80c9440f2986d4138a2c0044f1e8350f06d5f85333f39e34d6824e1,PodSandboxId:cbd6d8dcdecb
5855ee9cba10cc9813f89130927aa39cb1a6aa3dd9bcce92bdee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1701128739523782256,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-142525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9f922c35540c0fadcadcb2a82cc505c,},Annotations:map[string]string{io.kubernetes.container.hash: 730b4701,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=47e420a2-f93f-4416-8092-96ed23a149b8 name=/runtime.v1.RuntimeSer
vice/ListContainers
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.543978518Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=48088ac4-7803-49e6-8def-086faac969a0 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.544256493Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7759310d9b079979ff97c0105f43d4cef2cc82c7afcce19c6dd99be46df6afa5,Metadata:&PodSandboxMetadata{Name:hello-world-app-5f5d8b66bb-95999,Uid:39b7b876-84ac-48c3-8fb4-cc346e5b255a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701128979272154531,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-95999,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39b7b876-84ac-48c3-8fb4-cc346e5b255a,pod-template-hash: 5f5d8b66bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-27T23:49:38.916239485Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0abe4ba3a2b305220f08cf64ce28a36efcaa9e024f221b5e6eebf0d9fca9c5aa,Metadata:&PodSandboxMetadata{Name:nginx,Uid:d9b9e0c6-e0a3-493d-865c-46132cac0178,Namespace:defau
lt,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701128834124270919,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d9b9e0c6-e0a3-493d-865c-46132cac0178,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-27T23:47:13.783863753Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:08747a488ca2d624d87938ed89feb8899360c3df651ac6b0aaddd6b66760ce15,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-7fcf777cb7-nwhkb,Uid:28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1701128806535081199,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-nwhkb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.
uid: 28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3,pod-template-hash: 7fcf777cb7,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-27T23:46:38.695988119Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e4e5f3201616bb41a5483afc97f036a26e66586855eb8b846f6c8f97b3953bff,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-qrhkd,Uid:cbd4294e-de09-4e8a-9fcc-7b310a6ebe7b,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1701128799105574588,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,controller-uid: 0cb27fbf-43c5-4a6c-8e1f-e4811ea39dc9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-qrhkd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cbd4294e-de09-4e8a-9fcc-7b310a6ebe7b,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-27T23:46:38.76623551
8Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0b60bd0aaa82b7968b21d1a506f10343107d05d7acc7eb91fb79e7fe4ebff52d,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-8sqw6,Uid:2cab50b6-80a5-40c2-962e-f8839e3b87a7,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1701128799052115427,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,controller-uid: dbf06c40-0ee6-4f27-a0b3-63519bd0bad2,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-8sqw6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2cab50b6-80a5-40c2-962e-f8839e3b87a7,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-27T23:46:38.707308104Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:292e9dd51efa8c0ea0959f54774296e0353feabddc1a868b17e31366d7c4d821,Metadata:&PodSandbox
Metadata{Name:storage-provisioner,Uid:3a595214-eb1e-4d62-9624-e92ddd58c303,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701128764813003985,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a595214-eb1e-4d62-9624-e92ddd58c303,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]
}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-11-27T23:46:04.475261027Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:92a9ee45bf574ae47e638f21fce5eed8f695895759d3349c7fc1f8150a503f99,Metadata:&PodSandboxMetadata{Name:kube-proxy-rhr7p,Uid:9961fec5-e7fd-4a03-aa60-1143daf1ef01,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701128763647806390,Labels:map[string]string{controller-revision-hash: 5bdc57b48f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-rhr7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9961fec5-e7fd-4a03-aa60-1143daf1ef01,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-27T23:46:03.303155066Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:631fb6d387370fc7d459983ffe80c3dde8520db9c7c80c4a01
03deb5bf5982be,Metadata:&PodSandboxMetadata{Name:coredns-66bff467f8-vd4q5,Uid:2d07a1a9-173e-4270-b114-da5a7cde215c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701128763545625747,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bff467f8-vd4q5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d07a1a9-173e-4270-b114-da5a7cde215c,k8s-app: kube-dns,pod-template-hash: 66bff467f8,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-27T23:46:03.204340427Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f7cdcf53aa610462d756c5976dd67599efde16a91d4f5cf8b5a18240ec895ead,Metadata:&PodSandboxMetadata{Name:etcd-ingress-addon-legacy-142525,Uid:1f86d52eed4a2d47face47cef4b664a0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701128739269453348,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ingress-addon-legacy-142525,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 1f86d52eed4a2d47face47cef4b664a0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.57:2379,kubernetes.io/config.hash: 1f86d52eed4a2d47face47cef4b664a0,kubernetes.io/config.seen: 2023-11-27T23:45:38.212506322Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bcd41af1921563128e8a25c90bddc72a7380e2eacdf10e12a338050aef0ac8ef,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ingress-addon-legacy-142525,Uid:d12e497b0008e22acbcd5a9cf2dd48ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701128739244926327,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-142525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d12e497b0008e22acbcd5a9cf2dd48ac,kubernetes.i
o/config.seen: 2023-11-27T23:45:38.211094555Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:960a8da454d0f403ee084c571de07f713443eee261eecc1361176a1300c9fd22,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ingress-addon-legacy-142525,Uid:b395a1e17534e69e27827b1f8d737725,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701128739221939748,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-142525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b395a1e17534e69e27827b1f8d737725,kubernetes.io/config.seen: 2023-11-27T23:45:38.209563924Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cbd6d8dcdecb5855ee9cba10cc9813f89130927aa39cb1a6aa3dd9bcce92bdee,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ingress-addon-leg
acy-142525,Uid:f9f922c35540c0fadcadcb2a82cc505c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701128739106320099,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-142525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9f922c35540c0fadcadcb2a82cc505c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.57:8443,kubernetes.io/config.hash: f9f922c35540c0fadcadcb2a82cc505c,kubernetes.io/config.seen: 2023-11-27T23:45:38.208736819Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=48088ac4-7803-49e6-8def-086faac969a0 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.545650425Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c8ac707d-af14-41ed-bb06-9836aada8ab8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.545790166Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c8ac707d-af14-41ed-bb06-9836aada8ab8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.546145521Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18980423a0708c67e1494afabff9148724336deb04c3d15dd2f6bc04cbfd0f76,PodSandboxId:7759310d9b079979ff97c0105f43d4cef2cc82c7afcce19c6dd99be46df6afa5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701128982835742858,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-95999,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39b7b876-84ac-48c3-8fb4-cc346e5b255a,},Annotations:map[string]string{io.kubernetes.container.hash: a8fa7523,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e1782d9d175515bd4e42253a57ef8ec1ad638c134529bd72fd9ad1187e5069d,PodSandboxId:0abe4ba3a2b305220f08cf64ce28a36efcaa9e024f221b5e6eebf0d9fca9c5aa,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1701128839096056763,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d9b9e0c6-e0a3-493d-865c-46132cac0178,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 70977c77,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:154307453b6b0e3aaeadbe2b85ecce6ed2eb04c16bd99df39245c3301a555b53,PodSandboxId:08747a488ca2d624d87938ed89feb8899360c3df651ac6b0aaddd6b66760ce15,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1701128813951153284,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-nwhkb,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3,},Annotations:map[string]string{io.kubernetes.container.hash: f2192052,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8f889d816249986247027fde108ffb05448d9debd7793e55fd9e7460221d8e09,PodSandboxId:e4e5f3201616bb41a5483afc97f036a26e66586855eb8b846f6c8f97b3953bff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701128804409581760,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qrhkd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cbd4294e-de09-4e8a-9fcc-7b310a6ebe7b,},Annotations:map[string]string{io.kubernetes.container.hash: 41de8939,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1fbaaa46e8b80866628c005c8dfaea63dd71bd9966fc4c6d9b60031c5112c4a,PodSandboxId:0b60bd0aaa82b7968b21d1a506f10343107d05d7acc7eb91fb79e7fe4ebff52d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701128803243715308,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8sqw6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2cab50b6-80a5-40c2-962e-f8839e3b87a7,},Annotations:map[string]string{io.kubernetes.container.hash: b2d66619,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca907634657275c9e0449d5b12745ef0c452908fc3ded2940e29f5ef0c99eba,PodSandboxId:292e9dd51efa8c0ea0959f54774296e0353feabddc1a868b17e31366d7c4d821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701128765228814577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a595214-eb1e-4d62-9624-e92ddd58c303,},Annotations:map[string]string{io.kubernetes.container.hash: 8a9df6e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e85b90c1c322a0424741076aec98b8f9a855a5b57b7c73314070c22183abc1e,PodSandboxId:92a9ee45bf574ae47e638f21fce5eed8f695895759d3349c7fc1f8150a503f99,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1701128764786156743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rhr7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9961fec5-e7fd-4a03-aa60-1143daf1ef01,},Annotations:map[string]string{io.kubernetes.container.hash: d2d890fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4775850923fb529252cb950c175652bdb84ff3b9199b15cc362c4c6ce626137,PodSandboxId:631fb6d387370fc7d459983ffe80c3dde8520db9c7c80c4a0103deb5bf5982be,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701128764022503547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-vd4q5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d07a1a9-173e-4270-b114-da5a7cde215c,},Annotations:map[string]string{io.kubernetes.container.hash: 5d3a86cc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfd9ad62727b19147462f6b4d7536291af7d7874413581c1a78fe9611b44333,Pod
SandboxId:f7cdcf53aa610462d756c5976dd67599efde16a91d4f5cf8b5a18240ec895ead,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1701128741016128395,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-142525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f86d52eed4a2d47face47cef4b664a0,},Annotations:map[string]string{io.kubernetes.container.hash: 962fe94a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13f4b25bfa5857a487dd7d2adb2858ca9f0f32403f0042bdfdec6273c7974f7,PodSandboxId:bcd41af1921563128e8a25c90bddc72a7380
e2eacdf10e12a338050aef0ac8ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1701128740132192386,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-142525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:083bc641ee9babce93cb16db0d7898a1260475c02c7410f227c96cedce072899,PodSandboxId:960a8da454d0f403ee084c571de07f713443eee261
eecc1361176a1300c9fd22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1701128739656419580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-142525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:700801a1e80c9440f2986d4138a2c0044f1e8350f06d5f85333f39e34d6824e1,PodSandboxId:cbd6d8dcdecb
5855ee9cba10cc9813f89130927aa39cb1a6aa3dd9bcce92bdee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1701128739523782256,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-142525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9f922c35540c0fadcadcb2a82cc505c,},Annotations:map[string]string{io.kubernetes.container.hash: 730b4701,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c8ac707d-af14-41ed-bb06-9836aada8ab8 name=/runtime.v1alpha2.Runt
imeService/ListContainers
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.548958313Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1753fde8-635e-4df5-b14d-4cad9eb7750c name=/runtime.v1.RuntimeService/Version
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.549010577Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1753fde8-635e-4df5-b14d-4cad9eb7750c name=/runtime.v1.RuntimeService/Version
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.550503781Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e4443343-7afd-4463-872d-a6cb58dec6fd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.551056143Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701128996551041471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202349,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=e4443343-7afd-4463-872d-a6cb58dec6fd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.551554445Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c8c1da05-ae92-42e2-bf0d-838cc008af8f name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.551633273Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c8c1da05-ae92-42e2-bf0d-838cc008af8f name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.551937444Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18980423a0708c67e1494afabff9148724336deb04c3d15dd2f6bc04cbfd0f76,PodSandboxId:7759310d9b079979ff97c0105f43d4cef2cc82c7afcce19c6dd99be46df6afa5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701128982835742858,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-95999,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39b7b876-84ac-48c3-8fb4-cc346e5b255a,},Annotations:map[string]string{io.kubernetes.container.hash: a8fa7523,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e1782d9d175515bd4e42253a57ef8ec1ad638c134529bd72fd9ad1187e5069d,PodSandboxId:0abe4ba3a2b305220f08cf64ce28a36efcaa9e024f221b5e6eebf0d9fca9c5aa,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1701128839096056763,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d9b9e0c6-e0a3-493d-865c-46132cac0178,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 70977c77,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:154307453b6b0e3aaeadbe2b85ecce6ed2eb04c16bd99df39245c3301a555b53,PodSandboxId:08747a488ca2d624d87938ed89feb8899360c3df651ac6b0aaddd6b66760ce15,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1701128813951153284,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-nwhkb,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3,},Annotations:map[string]string{io.kubernetes.container.hash: f2192052,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8f889d816249986247027fde108ffb05448d9debd7793e55fd9e7460221d8e09,PodSandboxId:e4e5f3201616bb41a5483afc97f036a26e66586855eb8b846f6c8f97b3953bff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701128804409581760,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qrhkd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cbd4294e-de09-4e8a-9fcc-7b310a6ebe7b,},Annotations:map[string]string{io.kubernetes.container.hash: 41de8939,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1fbaaa46e8b80866628c005c8dfaea63dd71bd9966fc4c6d9b60031c5112c4a,PodSandboxId:0b60bd0aaa82b7968b21d1a506f10343107d05d7acc7eb91fb79e7fe4ebff52d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701128803243715308,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8sqw6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2cab50b6-80a5-40c2-962e-f8839e3b87a7,},Annotations:map[string]string{io.kubernetes.container.hash: b2d66619,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca907634657275c9e0449d5b12745ef0c452908fc3ded2940e29f5ef0c99eba,PodSandboxId:292e9dd51efa8c0ea0959f54774296e0353feabddc1a868b17e31366d7c4d821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701128765228814577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a595214-eb1e-4d62-9624-e92ddd58c303,},Annotations:map[string]string{io.kubernetes.container.hash: 8a9df6e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e85b90c1c322a0424741076aec98b8f9a855a5b57b7c73314070c22183abc1e,PodSandboxId:92a9ee45bf574ae47e638f21fce5eed8f695895759d3349c7fc1f8150a503f99,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1701128764786156743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rhr7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9961fec5-e7fd-4a03-aa60-1143daf1ef01,},Annotations:map[string]string{io.kubernetes.container.hash: d2d890fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4775850923fb529252cb950c175652bdb84ff3b9199b15cc362c4c6ce626137,PodSandboxId:631fb6d387370fc7d459983ffe80c3dde8520db9c7c80c4a0103deb5bf5982be,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701128764022503547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-vd4q5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d07a1a9-173e-4270-b114-da5a7cde215c,},Annotations:map[string]string{io.kubernetes.container.hash: 5d3a86cc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfd9ad62727b19147462f6b4d7536291af7d7874413581c1a78fe9611b44333,Pod
SandboxId:f7cdcf53aa610462d756c5976dd67599efde16a91d4f5cf8b5a18240ec895ead,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1701128741016128395,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-142525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f86d52eed4a2d47face47cef4b664a0,},Annotations:map[string]string{io.kubernetes.container.hash: 962fe94a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13f4b25bfa5857a487dd7d2adb2858ca9f0f32403f0042bdfdec6273c7974f7,PodSandboxId:bcd41af1921563128e8a25c90bddc72a7380
e2eacdf10e12a338050aef0ac8ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1701128740132192386,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-142525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:083bc641ee9babce93cb16db0d7898a1260475c02c7410f227c96cedce072899,PodSandboxId:960a8da454d0f403ee084c571de07f713443eee261
eecc1361176a1300c9fd22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1701128739656419580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-142525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:700801a1e80c9440f2986d4138a2c0044f1e8350f06d5f85333f39e34d6824e1,PodSandboxId:cbd6d8dcdecb
5855ee9cba10cc9813f89130927aa39cb1a6aa3dd9bcce92bdee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1701128739523782256,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-142525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9f922c35540c0fadcadcb2a82cc505c,},Annotations:map[string]string{io.kubernetes.container.hash: 730b4701,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c8c1da05-ae92-42e2-bf0d-838cc008af8f name=/runtime.v1.RuntimeSer
vice/ListContainers
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.584848360Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=95823dd8-1733-440f-9f23-07ab111fc7ca name=/runtime.v1.RuntimeService/Version
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.584905666Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=95823dd8-1733-440f-9f23-07ab111fc7ca name=/runtime.v1.RuntimeService/Version
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.585762013Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ebb7d5d4-efea-418b-9a74-687c444de7e7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.586296713Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701128996586282379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202349,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=ebb7d5d4-efea-418b-9a74-687c444de7e7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.586822546Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=757ecbd0-2cbf-4b37-8459-a2b81bbca5d0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.586877497Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=757ecbd0-2cbf-4b37-8459-a2b81bbca5d0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:49:56 ingress-addon-legacy-142525 crio[720]: time="2023-11-27 23:49:56.587140663Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18980423a0708c67e1494afabff9148724336deb04c3d15dd2f6bc04cbfd0f76,PodSandboxId:7759310d9b079979ff97c0105f43d4cef2cc82c7afcce19c6dd99be46df6afa5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701128982835742858,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-95999,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 39b7b876-84ac-48c3-8fb4-cc346e5b255a,},Annotations:map[string]string{io.kubernetes.container.hash: a8fa7523,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e1782d9d175515bd4e42253a57ef8ec1ad638c134529bd72fd9ad1187e5069d,PodSandboxId:0abe4ba3a2b305220f08cf64ce28a36efcaa9e024f221b5e6eebf0d9fca9c5aa,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1701128839096056763,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d9b9e0c6-e0a3-493d-865c-46132cac0178,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 70977c77,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:154307453b6b0e3aaeadbe2b85ecce6ed2eb04c16bd99df39245c3301a555b53,PodSandboxId:08747a488ca2d624d87938ed89feb8899360c3df651ac6b0aaddd6b66760ce15,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1701128813951153284,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-nwhkb,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3,},Annotations:map[string]string{io.kubernetes.container.hash: f2192052,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:8f889d816249986247027fde108ffb05448d9debd7793e55fd9e7460221d8e09,PodSandboxId:e4e5f3201616bb41a5483afc97f036a26e66586855eb8b846f6c8f97b3953bff,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701128804409581760,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qrhkd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cbd4294e-de09-4e8a-9fcc-7b310a6ebe7b,},Annotations:map[string]string{io.kubernetes.container.hash: 41de8939,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1fbaaa46e8b80866628c005c8dfaea63dd71bd9966fc4c6d9b60031c5112c4a,PodSandboxId:0b60bd0aaa82b7968b21d1a506f10343107d05d7acc7eb91fb79e7fe4ebff52d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701128803243715308,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8sqw6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2cab50b6-80a5-40c2-962e-f8839e3b87a7,},Annotations:map[string]string{io.kubernetes.container.hash: b2d66619,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca907634657275c9e0449d5b12745ef0c452908fc3ded2940e29f5ef0c99eba,PodSandboxId:292e9dd51efa8c0ea0959f54774296e0353feabddc1a868b17e31366d7c4d821,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701128765228814577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a595214-eb1e-4d62-9624-e92ddd58c303,},Annotations:map[string]string{io.kubernetes.container.hash: 8a9df6e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e85b90c1c322a0424741076aec98b8f9a855a5b57b7c73314070c22183abc1e,PodSandboxId:92a9ee45bf574ae47e638f21fce5eed8f695895759d3349c7fc1f8150a503f99,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1701128764786156743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rhr7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9961fec5-e7fd-4a03-aa60-1143daf1ef01,},Annotations:map[string]string{io.kubernetes.container.hash: d2d890fd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4775850923fb529252cb950c175652bdb84ff3b9199b15cc362c4c6ce626137,PodSandboxId:631fb6d387370fc7d459983ffe80c3dde8520db9c7c80c4a0103deb5bf5982be,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701128764022503547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-vd4q5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d07a1a9-173e-4270-b114-da5a7cde215c,},Annotations:map[string]string{io.kubernetes.container.hash: 5d3a86cc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfd9ad62727b19147462f6b4d7536291af7d7874413581c1a78fe9611b44333,Pod
SandboxId:f7cdcf53aa610462d756c5976dd67599efde16a91d4f5cf8b5a18240ec895ead,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1701128741016128395,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-142525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f86d52eed4a2d47face47cef4b664a0,},Annotations:map[string]string{io.kubernetes.container.hash: 962fe94a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13f4b25bfa5857a487dd7d2adb2858ca9f0f32403f0042bdfdec6273c7974f7,PodSandboxId:bcd41af1921563128e8a25c90bddc72a7380
e2eacdf10e12a338050aef0ac8ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1701128740132192386,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-142525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:083bc641ee9babce93cb16db0d7898a1260475c02c7410f227c96cedce072899,PodSandboxId:960a8da454d0f403ee084c571de07f713443eee261
eecc1361176a1300c9fd22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1701128739656419580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-142525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:700801a1e80c9440f2986d4138a2c0044f1e8350f06d5f85333f39e34d6824e1,PodSandboxId:cbd6d8dcdecb
5855ee9cba10cc9813f89130927aa39cb1a6aa3dd9bcce92bdee,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1701128739523782256,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-142525,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9f922c35540c0fadcadcb2a82cc505c,},Annotations:map[string]string{io.kubernetes.container.hash: 730b4701,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=757ecbd0-2cbf-4b37-8459-a2b81bbca5d0 name=/runtime.v1.RuntimeSer
vice/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	18980423a0708       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            13 seconds ago      Running             hello-world-app           0                   7759310d9b079       hello-world-app-5f5d8b66bb-95999
	4e1782d9d1755       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                    2 minutes ago       Running             nginx                     0                   0abe4ba3a2b30       nginx
	154307453b6b0       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   08747a488ca2d       ingress-nginx-controller-7fcf777cb7-nwhkb
	8f889d8162499       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   e4e5f3201616b       ingress-nginx-admission-patch-qrhkd
	d1fbaaa46e8b8       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   0b60bd0aaa82b       ingress-nginx-admission-create-8sqw6
	1ca9076346572       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   292e9dd51efa8       storage-provisioner
	1e85b90c1c322       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   92a9ee45bf574       kube-proxy-rhr7p
	e4775850923fb       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   631fb6d387370       coredns-66bff467f8-vd4q5
	3dfd9ad62727b       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   f7cdcf53aa610       etcd-ingress-addon-legacy-142525
	f13f4b25bfa58       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   bcd41af192156       kube-scheduler-ingress-addon-legacy-142525
	083bc641ee9ba       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   960a8da454d0f       kube-controller-manager-ingress-addon-legacy-142525
	700801a1e80c9       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   cbd6d8dcdecb5       kube-apiserver-ingress-addon-legacy-142525
	
	* 
	* ==> coredns [e4775850923fb529252cb950c175652bdb84ff3b9199b15cc362c4c6ce626137] <==
	* [INFO] 10.244.0.6:39166 - 62014 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000073972s
	[INFO] 10.244.0.6:39166 - 8289 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000054213s
	[INFO] 10.244.0.6:39166 - 25648 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000068508s
	[INFO] 10.244.0.6:39166 - 23143 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000099373s
	[INFO] 10.244.0.6:35491 - 51358 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000125755s
	[INFO] 10.244.0.6:35491 - 26675 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000146382s
	[INFO] 10.244.0.6:35491 - 18060 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000068999s
	[INFO] 10.244.0.6:35491 - 64116 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000058991s
	[INFO] 10.244.0.6:35491 - 61730 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061227s
	[INFO] 10.244.0.6:35491 - 55556 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065743s
	[INFO] 10.244.0.6:35491 - 60471 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074256s
	[INFO] 10.244.0.6:55002 - 39643 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000088833s
	[INFO] 10.244.0.6:36213 - 23918 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000053111s
	[INFO] 10.244.0.6:36213 - 49585 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000028975s
	[INFO] 10.244.0.6:55002 - 64233 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00012722s
	[INFO] 10.244.0.6:36213 - 8404 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000182259s
	[INFO] 10.244.0.6:36213 - 17174 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000041284s
	[INFO] 10.244.0.6:55002 - 21178 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036299s
	[INFO] 10.244.0.6:36213 - 47357 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005986s
	[INFO] 10.244.0.6:55002 - 54375 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057691s
	[INFO] 10.244.0.6:55002 - 53135 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069612s
	[INFO] 10.244.0.6:36213 - 43958 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000144613s
	[INFO] 10.244.0.6:36213 - 32912 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000102363s
	[INFO] 10.244.0.6:55002 - 65372 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060581s
	[INFO] 10.244.0.6:55002 - 20077 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000042537s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-142525
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-142525
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=ingress-addon-legacy-142525
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_27T23_45_47_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 23:45:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-142525
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Nov 2023 23:49:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Nov 2023 23:49:48 +0000   Mon, 27 Nov 2023 23:45:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Nov 2023 23:49:48 +0000   Mon, 27 Nov 2023 23:45:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Nov 2023 23:49:48 +0000   Mon, 27 Nov 2023 23:45:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Nov 2023 23:49:48 +0000   Mon, 27 Nov 2023 23:45:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.57
	  Hostname:    ingress-addon-legacy-142525
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d00049ca9e34c85a719bc93474f5945
	  System UUID:                7d00049c-a9e3-4c85-a719-bc93474f5945
	  Boot ID:                    6924e1fc-a20e-433c-aa52-804986112423
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-95999                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 coredns-66bff467f8-vd4q5                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m54s
	  kube-system                 etcd-ingress-addon-legacy-142525                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-apiserver-ingress-addon-legacy-142525             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-142525    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-rhr7p                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-scheduler-ingress-addon-legacy-142525             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 4m9s   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s   kubelet     Node ingress-addon-legacy-142525 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s   kubelet     Node ingress-addon-legacy-142525 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s   kubelet     Node ingress-addon-legacy-142525 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m58s  kubelet     Node ingress-addon-legacy-142525 status is now: NodeReady
	  Normal  Starting                 3m51s  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Nov27 23:45] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.093115] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.380778] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.351355] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.145219] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.078829] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.351479] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.109990] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.148690] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.111704] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.214307] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[  +7.832403] systemd-fstab-generator[1030]: Ignoring "noauto" for root device
	[  +3.346992] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +8.862538] systemd-fstab-generator[1435]: Ignoring "noauto" for root device
	[Nov27 23:46] kauditd_printk_skb: 6 callbacks suppressed
	[ +33.099762] kauditd_printk_skb: 20 callbacks suppressed
	[  +8.051913] kauditd_printk_skb: 6 callbacks suppressed
	[Nov27 23:47] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.669089] kauditd_printk_skb: 3 callbacks suppressed
	[Nov27 23:49] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [3dfd9ad62727b19147462f6b4d7536291af7d7874413581c1a78fe9611b44333] <==
	* raft2023/11/27 23:45:41 INFO: 79ee2fa200dbf73d switched to configuration voters=(8786012295892039485)
	2023-11-27 23:45:41.186452 I | etcdserver/membership: added member 79ee2fa200dbf73d [https://192.168.39.57:2380] to cluster cdb6bc6ece496785
	2023-11-27 23:45:41.191884 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-27 23:45:41.192184 I | embed: listening for peers on 192.168.39.57:2380
	2023-11-27 23:45:41.192600 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/11/27 23:45:41 INFO: 79ee2fa200dbf73d is starting a new election at term 1
	raft2023/11/27 23:45:41 INFO: 79ee2fa200dbf73d became candidate at term 2
	raft2023/11/27 23:45:41 INFO: 79ee2fa200dbf73d received MsgVoteResp from 79ee2fa200dbf73d at term 2
	raft2023/11/27 23:45:41 INFO: 79ee2fa200dbf73d became leader at term 2
	raft2023/11/27 23:45:41 INFO: raft.node: 79ee2fa200dbf73d elected leader 79ee2fa200dbf73d at term 2
	2023-11-27 23:45:41.470535 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-27 23:45:41.471932 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-27 23:45:41.471967 I | etcdserver/api: enabled capabilities for version 3.4
	2023-11-27 23:45:41.471988 I | etcdserver: published {Name:ingress-addon-legacy-142525 ClientURLs:[https://192.168.39.57:2379]} to cluster cdb6bc6ece496785
	2023-11-27 23:45:41.471992 I | embed: ready to serve client requests
	2023-11-27 23:45:41.472801 I | embed: ready to serve client requests
	2023-11-27 23:45:41.473744 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-27 23:45:41.474028 I | embed: serving client requests on 192.168.39.57:2379
	2023-11-27 23:46:03.188773 W | etcdserver: request "header:<ID:17815549714928579912 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/controllerrevisions/kube-system/kube-proxy-5bdc57b48f\" mod_revision:0 > success:<request_put:<key:\"/registry/controllerrevisions/kube-system/kube-proxy-5bdc57b48f\" value_size:2204 >> failure:<>>" with result "size:16" took too long (288.914736ms) to execute
	2023-11-27 23:46:03.195891 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (520.721926ms) to execute
	2023-11-27 23:46:03.207885 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-28629\" " with result "range_response_count:1 size:3656" took too long (533.240521ms) to execute
	2023-11-27 23:46:03.208628 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-142525\" " with result "range_response_count:1 size:6295" took too long (397.823985ms) to execute
	2023-11-27 23:46:03.216112 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:1 size:209" took too long (503.231346ms) to execute
	2023-11-27 23:47:24.036594 W | etcdserver: read-only range request "key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true " with result "range_response_count:0 size:7" took too long (176.999891ms) to execute
	2023-11-27 23:47:24.036876 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:2213" took too long (199.986561ms) to execute
	
	* 
	* ==> kernel <==
	*  23:49:56 up 4 min,  0 users,  load average: 0.95, 0.53, 0.24
	Linux ingress-addon-legacy-142525 5.10.57 #1 SMP Mon Nov 27 21:58:27 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [700801a1e80c9440f2986d4138a2c0044f1e8350f06d5f85333f39e34d6824e1] <==
	* I1127 23:45:46.783077       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1127 23:45:47.536434       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1127 23:45:47.682559       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1127 23:45:48.286135       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1127 23:46:02.514147       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1127 23:46:02.666379       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1127 23:46:03.190169       1 trace.go:116] Trace[1240298845]: "Create" url:/apis/apps/v1/namespaces/kube-system/controllerrevisions,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:daemon-set-controller,client:192.168.39.57 (started: 2023-11-27 23:46:02.664352459 +0000 UTC m=+22.932438472) (total time: 525.781933ms):
	Trace[1240298845]: [525.741823ms] [525.678706ms] Object stored in database
	I1127 23:46:03.200099       1 trace.go:116] Trace[2099373873]: "Create" url:/api/v1/namespaces/kube-system/endpoints,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:endpoint-controller,client:192.168.39.57 (started: 2023-11-27 23:46:02.65589374 +0000 UTC m=+22.923979758) (total time: 544.106092ms):
	Trace[2099373873]: [544.001282ms] [542.070596ms] Object stored in database
	I1127 23:46:03.218269       1 trace.go:116] Trace[569124941]: "Get" url:/api/v1/namespaces/default/serviceaccounts/default,user-agent:kubectl/v1.18.20 (linux/amd64) kubernetes/1f3e19b,client:127.0.0.1 (started: 2023-11-27 23:46:02.712136426 +0000 UTC m=+22.980222437) (total time: 506.110398ms):
	Trace[569124941]: [505.786904ms] [505.778092ms] About to write a response
	I1127 23:46:03.218449       1 trace.go:116] Trace[1961343418]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2023-11-27 23:46:02.641441708 +0000 UTC m=+22.909527742) (total time: 576.853357ms):
	Trace[1961343418]: [576.645693ms] [576.570937ms] Transaction committed
	I1127 23:46:03.218653       1 trace.go:116] Trace[976196881]: "Create" url:/api/v1/namespaces/kube-system/pods/coredns-66bff467f8-vd4q5/binding,user-agent:kube-scheduler/v1.18.20 (linux/amd64) kubernetes/1f3e19b/scheduler,client:192.168.39.57 (started: 2023-11-27 23:46:02.620210945 +0000 UTC m=+22.888296957) (total time: 598.425715ms):
	Trace[976196881]: [598.343803ms] [597.366595ms] Object stored in database
	I1127 23:46:03.222003       1 trace.go:116] Trace[1624295771]: "GuaranteedUpdate etcd3" type:*apps.Deployment (started: 2023-11-27 23:46:02.683296335 +0000 UTC m=+22.951382368) (total time: 538.688685ms):
	Trace[1624295771]: [538.232998ms] [537.034857ms] Transaction committed
	I1127 23:46:03.222356       1 trace.go:116] Trace[224357682]: "Update" url:/apis/apps/v1/namespaces/kube-system/deployments/coredns/status,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:deployment-controller,client:192.168.39.57 (started: 2023-11-27 23:46:02.68310965 +0000 UTC m=+22.951195675) (total time: 539.228234ms):
	Trace[224357682]: [539.014592ms] [538.892519ms] Object stored in database
	I1127 23:46:03.240905       1 trace.go:116] Trace[935049568]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-66bff467f8-28629,user-agent:kubelet/v1.18.20 (linux/amd64) kubernetes/1f3e19b,client:192.168.39.57 (started: 2023-11-27 23:46:02.635916007 +0000 UTC m=+22.904002022) (total time: 590.694286ms):
	Trace[935049568]: [590.626862ms] [590.620978ms] About to write a response
	I1127 23:46:38.658093       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1127 23:47:13.608267       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1127 23:49:49.084955       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [083bc641ee9babce93cb16db0d7898a1260475c02c7410f227c96cedce072899] <==
	* I1127 23:46:02.567638       1 shared_informer.go:230] Caches are synced for PVC protection 
	I1127 23:46:02.570615       1 shared_informer.go:230] Caches are synced for daemon sets 
	I1127 23:46:02.570734       1 shared_informer.go:230] Caches are synced for disruption 
	I1127 23:46:02.570755       1 disruption.go:339] Sending events to api server.
	I1127 23:46:02.590983       1 shared_informer.go:230] Caches are synced for attach detach 
	I1127 23:46:02.651295       1 shared_informer.go:230] Caches are synced for endpoint 
	I1127 23:46:02.679768       1 shared_informer.go:230] Caches are synced for resource quota 
	I1127 23:46:02.680912       1 shared_informer.go:230] Caches are synced for resource quota 
	I1127 23:46:02.809504       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1127 23:46:02.809544       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1127 23:46:03.231739       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"4c39cd9e-1bd8-4d2b-a5fc-39c1b58ef6a7", APIVersion:"apps/v1", ResourceVersion:"212", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-rhr7p
	I1127 23:46:03.235953       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I1127 23:46:03.236012       1 shared_informer.go:230] Caches are synced for garbage collector 
	E1127 23:46:03.300262       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"4c39cd9e-1bd8-4d2b-a5fc-39c1b58ef6a7", ResourceVersion:"212", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63836725547, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001632740), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0xc0016327a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001632800), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001120880), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0xc001632860), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0016328c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001632980)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0016b2b40), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00148ddd8), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000254ee0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0001e68d8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00148de28)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I1127 23:46:03.356153       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"8c24e313-7cd6-42d4-8cdf-9b670819d17c", APIVersion:"apps/v1", ResourceVersion:"350", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1127 23:46:03.404478       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"c866fae0-0383-4974-ad16-cae504a81232", APIVersion:"apps/v1", ResourceVersion:"351", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-28629
	I1127 23:46:38.654171       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"ef1399cf-30d3-48ef-9bf9-40bde72da219", APIVersion:"apps/v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1127 23:46:38.674841       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"405fa6ee-28d3-4734-a6cc-f4216018c9a9", APIVersion:"apps/v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-nwhkb
	I1127 23:46:38.697074       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"dbf06c40-0ee6-4f27-a0b3-63519bd0bad2", APIVersion:"batch/v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-8sqw6
	I1127 23:46:38.760861       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"0cb27fbf-43c5-4a6c-8e1f-e4811ea39dc9", APIVersion:"batch/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-qrhkd
	I1127 23:46:44.384489       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"dbf06c40-0ee6-4f27-a0b3-63519bd0bad2", APIVersion:"batch/v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1127 23:46:45.385094       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"0cb27fbf-43c5-4a6c-8e1f-e4811ea39dc9", APIVersion:"batch/v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1127 23:49:38.882175       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"e355bf15-f995-4c22-a0f4-06f41f577938", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1127 23:49:38.899399       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"b8131174-3841-457a-82e6-4ad85dd2c26f", APIVersion:"apps/v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-95999
	
	* 
	* ==> kube-proxy [1e85b90c1c322a0424741076aec98b8f9a855a5b57b7c73314070c22183abc1e] <==
	* W1127 23:46:05.013369       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1127 23:46:05.022424       1 node.go:136] Successfully retrieved node IP: 192.168.39.57
	I1127 23:46:05.022473       1 server_others.go:186] Using iptables Proxier.
	I1127 23:46:05.022799       1 server.go:583] Version: v1.18.20
	I1127 23:46:05.031117       1 config.go:315] Starting service config controller
	I1127 23:46:05.031162       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1127 23:46:05.031188       1 config.go:133] Starting endpoints config controller
	I1127 23:46:05.031197       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1127 23:46:05.131840       1 shared_informer.go:230] Caches are synced for service config 
	I1127 23:46:05.131991       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [f13f4b25bfa5857a487dd7d2adb2858ca9f0f32403f0042bdfdec6273c7974f7] <==
	* I1127 23:45:44.531949       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1127 23:45:44.533481       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1127 23:45:44.533849       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1127 23:45:44.533938       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1127 23:45:44.533965       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1127 23:45:44.536802       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1127 23:45:44.536993       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1127 23:45:44.537167       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1127 23:45:44.537378       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1127 23:45:44.537626       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1127 23:45:44.537922       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1127 23:45:44.538183       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1127 23:45:44.538474       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1127 23:45:44.538860       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1127 23:45:44.539091       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1127 23:45:44.539402       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1127 23:45:44.539751       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1127 23:45:45.363639       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1127 23:45:45.419317       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1127 23:45:45.460897       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1127 23:45:45.590388       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1127 23:45:45.674292       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1127 23:45:45.698855       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1127 23:45:48.134227       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1127 23:46:02.607315       1 factory.go:503] pod: kube-system/coredns-66bff467f8-vd4q5 is already present in the active queue
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-11-27 23:45:15 UTC, ends at Mon 2023-11-27 23:49:57 UTC. --
	Nov 27 23:46:55 ingress-addon-legacy-142525 kubelet[1442]: I1127 23:46:55.420766    1442 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Nov 27 23:46:55 ingress-addon-legacy-142525 kubelet[1442]: E1127 23:46:55.422748    1442 reflector.go:178] object-"kube-system"/"minikube-ingress-dns-token-6gxk7": Failed to list *v1.Secret: secrets "minikube-ingress-dns-token-6gxk7" is forbidden: User "system:node:ingress-addon-legacy-142525" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "ingress-addon-legacy-142525" and this object
	Nov 27 23:46:55 ingress-addon-legacy-142525 kubelet[1442]: I1127 23:46:55.584913    1442 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-6gxk7" (UniqueName: "kubernetes.io/secret/ebbb1f28-2ef5-438a-b333-efb9ca235177-minikube-ingress-dns-token-6gxk7") pod "kube-ingress-dns-minikube" (UID: "ebbb1f28-2ef5-438a-b333-efb9ca235177")
	Nov 27 23:46:56 ingress-addon-legacy-142525 kubelet[1442]: E1127 23:46:56.685642    1442 secret.go:195] Couldn't get secret kube-system/minikube-ingress-dns-token-6gxk7: failed to sync secret cache: timed out waiting for the condition
	Nov 27 23:46:56 ingress-addon-legacy-142525 kubelet[1442]: E1127 23:46:56.685844    1442 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/ebbb1f28-2ef5-438a-b333-efb9ca235177-minikube-ingress-dns-token-6gxk7 podName:ebbb1f28-2ef5-438a-b333-efb9ca235177 nodeName:}" failed. No retries permitted until 2023-11-27 23:46:57.185817824 +0000 UTC m=+69.695044515 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"minikube-ingress-dns-token-6gxk7\" (UniqueName: \"kubernetes.io/secret/ebbb1f28-2ef5-438a-b333-efb9ca235177-minikube-ingress-dns-token-6gxk7\") pod \"kube-ingress-dns-minikube\" (UID: \"ebbb1f28-2ef5-438a-b333-efb9ca235177\") : failed to sync secret cache: timed out waiting for the condition"
	Nov 27 23:47:13 ingress-addon-legacy-142525 kubelet[1442]: I1127 23:47:13.784160    1442 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Nov 27 23:47:13 ingress-addon-legacy-142525 kubelet[1442]: I1127 23:47:13.951639    1442 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-b7xlg" (UniqueName: "kubernetes.io/secret/d9b9e0c6-e0a3-493d-865c-46132cac0178-default-token-b7xlg") pod "nginx" (UID: "d9b9e0c6-e0a3-493d-865c-46132cac0178")
	Nov 27 23:49:38 ingress-addon-legacy-142525 kubelet[1442]: I1127 23:49:38.916337    1442 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Nov 27 23:49:39 ingress-addon-legacy-142525 kubelet[1442]: I1127 23:49:39.044328    1442 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-b7xlg" (UniqueName: "kubernetes.io/secret/39b7b876-84ac-48c3-8fb4-cc346e5b255a-default-token-b7xlg") pod "hello-world-app-5f5d8b66bb-95999" (UID: "39b7b876-84ac-48c3-8fb4-cc346e5b255a")
	Nov 27 23:49:40 ingress-addon-legacy-142525 kubelet[1442]: I1127 23:49:40.461397    1442 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0aed75e6ea62fd63f83b20bb2ca9622b63071f1951441f5f2491a1268c559a4f
	Nov 27 23:49:40 ingress-addon-legacy-142525 kubelet[1442]: I1127 23:49:40.494855    1442 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0aed75e6ea62fd63f83b20bb2ca9622b63071f1951441f5f2491a1268c559a4f
	Nov 27 23:49:40 ingress-addon-legacy-142525 kubelet[1442]: E1127 23:49:40.495281    1442 remote_runtime.go:295] ContainerStatus "0aed75e6ea62fd63f83b20bb2ca9622b63071f1951441f5f2491a1268c559a4f" from runtime service failed: rpc error: code = NotFound desc = could not find container "0aed75e6ea62fd63f83b20bb2ca9622b63071f1951441f5f2491a1268c559a4f": container with ID starting with 0aed75e6ea62fd63f83b20bb2ca9622b63071f1951441f5f2491a1268c559a4f not found: ID does not exist
	Nov 27 23:49:40 ingress-addon-legacy-142525 kubelet[1442]: I1127 23:49:40.550477    1442 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-6gxk7" (UniqueName: "kubernetes.io/secret/ebbb1f28-2ef5-438a-b333-efb9ca235177-minikube-ingress-dns-token-6gxk7") pod "ebbb1f28-2ef5-438a-b333-efb9ca235177" (UID: "ebbb1f28-2ef5-438a-b333-efb9ca235177")
	Nov 27 23:49:40 ingress-addon-legacy-142525 kubelet[1442]: I1127 23:49:40.563944    1442 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebbb1f28-2ef5-438a-b333-efb9ca235177-minikube-ingress-dns-token-6gxk7" (OuterVolumeSpecName: "minikube-ingress-dns-token-6gxk7") pod "ebbb1f28-2ef5-438a-b333-efb9ca235177" (UID: "ebbb1f28-2ef5-438a-b333-efb9ca235177"). InnerVolumeSpecName "minikube-ingress-dns-token-6gxk7". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 27 23:49:40 ingress-addon-legacy-142525 kubelet[1442]: I1127 23:49:40.650828    1442 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-6gxk7" (UniqueName: "kubernetes.io/secret/ebbb1f28-2ef5-438a-b333-efb9ca235177-minikube-ingress-dns-token-6gxk7") on node "ingress-addon-legacy-142525" DevicePath ""
	Nov 27 23:49:49 ingress-addon-legacy-142525 kubelet[1442]: E1127 23:49:49.075874    1442 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-nwhkb.179b9fcd76878f41", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-nwhkb", UID:"28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3", APIVersion:"v1", ResourceVersion:"473", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-142525"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15168a74410ad41, ext:241577428471, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15168a74410ad41, ext:241577428471, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-nwhkb.179b9fcd76878f41" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 27 23:49:49 ingress-addon-legacy-142525 kubelet[1442]: E1127 23:49:49.103556    1442 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-nwhkb.179b9fcd76878f41", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-nwhkb", UID:"28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3", APIVersion:"v1", ResourceVersion:"473", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-142525"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15168a74410ad41, ext:241577428471, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15168a745aff5b8, ext:241604644460, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-nwhkb.179b9fcd76878f41" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 27 23:49:51 ingress-addon-legacy-142525 kubelet[1442]: W1127 23:49:51.527632    1442 pod_container_deletor.go:77] Container "08747a488ca2d624d87938ed89feb8899360c3df651ac6b0aaddd6b66760ce15" not found in pod's containers
	Nov 27 23:49:53 ingress-addon-legacy-142525 kubelet[1442]: I1127 23:49:53.191641    1442 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-w4g7j" (UniqueName: "kubernetes.io/secret/28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3-ingress-nginx-token-w4g7j") pod "28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3" (UID: "28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3")
	Nov 27 23:49:53 ingress-addon-legacy-142525 kubelet[1442]: I1127 23:49:53.191771    1442 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3-webhook-cert") pod "28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3" (UID: "28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3")
	Nov 27 23:49:53 ingress-addon-legacy-142525 kubelet[1442]: I1127 23:49:53.201223    1442 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3" (UID: "28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 27 23:49:53 ingress-addon-legacy-142525 kubelet[1442]: I1127 23:49:53.201565    1442 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3-ingress-nginx-token-w4g7j" (OuterVolumeSpecName: "ingress-nginx-token-w4g7j") pod "28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3" (UID: "28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3"). InnerVolumeSpecName "ingress-nginx-token-w4g7j". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 27 23:49:53 ingress-addon-legacy-142525 kubelet[1442]: I1127 23:49:53.292175    1442 reconciler.go:319] Volume detached for volume "ingress-nginx-token-w4g7j" (UniqueName: "kubernetes.io/secret/28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3-ingress-nginx-token-w4g7j") on node "ingress-addon-legacy-142525" DevicePath ""
	Nov 27 23:49:53 ingress-addon-legacy-142525 kubelet[1442]: I1127 23:49:53.292235    1442 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3-webhook-cert") on node "ingress-addon-legacy-142525" DevicePath ""
	Nov 27 23:49:54 ingress-addon-legacy-142525 kubelet[1442]: W1127 23:49:54.143995    1442 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/28f667f1-4517-4ec6-8cc3-4c6bd25ba4d3/volumes" does not exist
	
	* 
	* ==> storage-provisioner [1ca907634657275c9e0449d5b12745ef0c452908fc3ded2940e29f5ef0c99eba] <==
	* I1127 23:46:05.358816       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1127 23:46:05.366998       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1127 23:46:05.367621       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1127 23:46:05.375540       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1127 23:46:05.376241       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-142525_569f8bf6-d433-49ea-9beb-eade4655c8d7!
	I1127 23:46:05.382015       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1a0c0f59-4fe0-47a6-a07b-2d3261a7b6fb", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-142525_569f8bf6-d433-49ea-9beb-eade4655c8d7 became leader
	I1127 23:46:05.477186       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-142525_569f8bf6-d433-49ea-9beb-eade4655c8d7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-142525 -n ingress-addon-legacy-142525
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-142525 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (182.15s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-883509 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-883509 -- exec busybox-5bc68d56bd-9qz8x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-883509 -- exec busybox-5bc68d56bd-9qz8x -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-883509 -- exec busybox-5bc68d56bd-9qz8x -- sh -c "ping -c 1 192.168.39.1": exit status 1 (210.974142ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-9qz8x): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-883509 -- exec busybox-5bc68d56bd-lgwvm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-883509 -- exec busybox-5bc68d56bd-lgwvm -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-883509 -- exec busybox-5bc68d56bd-lgwvm -- sh -c "ping -c 1 192.168.39.1": exit status 1 (183.63201ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-lgwvm): exit status 1
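The "ping: permission denied (are you root?)" message in both stderr captures above points at a missing raw-socket privilege rather than a network problem: busybox ping needs CAP_NET_RAW to open a raw ICMP socket, and that capability is likely absent here because CRI-O (the runtime in this KVM_Linux_crio job) does not include NET_RAW in its default capability set. The sketch below is only an illustration of how one might reproduce and confirm that by granting the capability manually; the pod name ping-test, the busybox image tag, and the use of plain kubectl against the multinode-883509 context are assumptions for the example, not part of the test suite.

	# Hypothetical pod that adds CAP_NET_RAW so busybox ping can open a raw ICMP socket.
	kubectl --context multinode-883509 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: ping-test            # illustrative name, not created by the test
	spec:
	  containers:
	  - name: busybox
	    image: busybox:1.28
	    command: ["sleep", "3600"]
	    securityContext:
	      capabilities:
	        add: ["NET_RAW"]     # restores the capability the runtime drops by default
	EOF
	# Once the pod is Running, the same ping that failed above should succeed:
	kubectl --context multinode-883509 exec ping-test -- ping -c 1 192.168.39.1

Whether a fix belongs in the test's busybox manifest, in the test assertion, or in the runtime's default capability set is a separate question; the sketch only shows why the exec commands above exit with a permission error.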
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-883509 -n multinode-883509
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-883509 logs -n 25: (1.334407389s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-279495 ssh -- ls                    | mount-start-2-279495 | jenkins | v1.32.0 | 27 Nov 23 23:53 UTC | 27 Nov 23 23:53 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-279495 ssh --                       | mount-start-2-279495 | jenkins | v1.32.0 | 27 Nov 23 23:53 UTC | 27 Nov 23 23:53 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-279495                           | mount-start-2-279495 | jenkins | v1.32.0 | 27 Nov 23 23:53 UTC | 27 Nov 23 23:53 UTC |
	| start   | -p mount-start-2-279495                           | mount-start-2-279495 | jenkins | v1.32.0 | 27 Nov 23 23:53 UTC | 27 Nov 23 23:54 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-279495 | jenkins | v1.32.0 | 27 Nov 23 23:54 UTC |                     |
	|         | --profile mount-start-2-279495                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-279495 ssh -- ls                    | mount-start-2-279495 | jenkins | v1.32.0 | 27 Nov 23 23:54 UTC | 27 Nov 23 23:54 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-279495 ssh --                       | mount-start-2-279495 | jenkins | v1.32.0 | 27 Nov 23 23:54 UTC | 27 Nov 23 23:54 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-279495                           | mount-start-2-279495 | jenkins | v1.32.0 | 27 Nov 23 23:54 UTC | 27 Nov 23 23:54 UTC |
	| delete  | -p mount-start-1-266908                           | mount-start-1-266908 | jenkins | v1.32.0 | 27 Nov 23 23:54 UTC | 27 Nov 23 23:54 UTC |
	| start   | -p multinode-883509                               | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:54 UTC | 27 Nov 23 23:55 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-883509 -- apply -f                   | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:55 UTC | 27 Nov 23 23:55 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-883509 -- rollout                    | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:55 UTC | 27 Nov 23 23:56 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-883509 -- get pods -o                | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-883509 -- get pods -o                | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-883509 -- exec                       | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | busybox-5bc68d56bd-9qz8x --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-883509 -- exec                       | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | busybox-5bc68d56bd-lgwvm --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-883509 -- exec                       | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | busybox-5bc68d56bd-9qz8x --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-883509 -- exec                       | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | busybox-5bc68d56bd-lgwvm --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-883509 -- exec                       | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | busybox-5bc68d56bd-9qz8x -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-883509 -- exec                       | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | busybox-5bc68d56bd-lgwvm -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-883509 -- get pods -o                | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-883509 -- exec                       | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | busybox-5bc68d56bd-9qz8x                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-883509 -- exec                       | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC |                     |
	|         | busybox-5bc68d56bd-9qz8x -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-883509 -- exec                       | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | busybox-5bc68d56bd-lgwvm                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-883509 -- exec                       | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC |                     |
	|         | busybox-5bc68d56bd-lgwvm -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:54:08
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:54:08.940610   25147 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:54:08.940955   25147 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:54:08.940968   25147 out.go:309] Setting ErrFile to fd 2...
	I1127 23:54:08.940974   25147 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:54:08.941208   25147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1127 23:54:08.941937   25147 out.go:303] Setting JSON to false
	I1127 23:54:08.942969   25147 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2196,"bootTime":1701127053,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 23:54:08.943035   25147 start.go:138] virtualization: kvm guest
	I1127 23:54:08.945295   25147 out.go:177] * [multinode-883509] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 23:54:08.947150   25147 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:54:08.947162   25147 notify.go:220] Checking for updates...
	I1127 23:54:08.948566   25147 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:54:08.950105   25147 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1127 23:54:08.951414   25147 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1127 23:54:08.952823   25147 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 23:54:08.954173   25147 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:54:08.955892   25147 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:54:08.992247   25147 out.go:177] * Using the kvm2 driver based on user configuration
	I1127 23:54:08.993684   25147 start.go:298] selected driver: kvm2
	I1127 23:54:08.993700   25147 start.go:902] validating driver "kvm2" against <nil>
	I1127 23:54:08.993711   25147 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:54:08.994390   25147 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:54:08.994463   25147 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17206-4749/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1127 23:54:09.008867   25147 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1127 23:54:09.008916   25147 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 23:54:09.009130   25147 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1127 23:54:09.009207   25147 cni.go:84] Creating CNI manager for ""
	I1127 23:54:09.009221   25147 cni.go:136] 0 nodes found, recommending kindnet
	I1127 23:54:09.009228   25147 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1127 23:54:09.009239   25147 start_flags.go:323] config:
	{Name:multinode-883509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-883509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:54:09.009365   25147 iso.go:125] acquiring lock: {Name:mkcbf4fbddcb89ef7fa17df683cb708781ecb7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:54:09.011848   25147 out.go:177] * Starting control plane node multinode-883509 in cluster multinode-883509
	I1127 23:54:09.013398   25147 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:54:09.013427   25147 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1127 23:54:09.013436   25147 cache.go:56] Caching tarball of preloaded images
	I1127 23:54:09.013503   25147 preload.go:174] Found /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1127 23:54:09.013516   25147 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
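The cache lines above show the start skipping the preload download because the lz4 tarball is already on disk. A minimal sketch of that "reuse the local tarball if present" check, assuming the path from the log and nothing about minikube's real cache helpers:

    package main

    import (
    	"fmt"
    	"os"
    )

    // preloadPath mirrors the tarball location seen in the log; purely illustrative.
    const preloadPath = "/home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4"

    func main() {
    	// If the tarball already exists locally, skip the download and reuse it.
    	if _, err := os.Stat(preloadPath); err == nil {
    		fmt.Println("found local preload, skipping download")
    		return
    	}
    	fmt.Println("no local preload, would download it here")
    }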
	I1127 23:54:09.013811   25147 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/config.json ...
	I1127 23:54:09.013832   25147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/config.json: {Name:mkb7c5bac1a4223a706a97319d319337b6747abd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:54:09.013964   25147 start.go:365] acquiring machines lock for multinode-883509: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1127 23:54:09.013993   25147 start.go:369] acquired machines lock for "multinode-883509" in 17.032µs
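The "acquiring machines lock ... Delay:500ms Timeout:13m0s" pair above describes a named lock acquired by polling with a fixed delay until an overall timeout, so concurrent profile operations cannot race on the same machine state. A bare-bones sketch of that acquire-with-timeout shape; the in-process map used as a lock store here is an assumption for illustration, not minikube's lock implementation:

    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    var (
    	mu   sync.Mutex
    	held = map[string]bool{}
    )

    // acquire polls every delay until the named lock is free or the timeout expires.
    func acquire(name string, delay, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		mu.Lock()
    		if !held[name] {
    			held[name] = true
    			mu.Unlock()
    			return nil
    		}
    		mu.Unlock()
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out acquiring lock %q after %v", name, timeout)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	start := time.Now()
    	if err := acquire("multinode-883509", 500*time.Millisecond, 13*time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("acquired machines lock in %v\n", time.Since(start))
    }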
	I1127 23:54:09.014009   25147 start.go:93] Provisioning new machine with config: &{Name:multinode-883509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-883509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 23:54:09.014077   25147 start.go:125] createHost starting for "" (driver="kvm2")
	I1127 23:54:09.016073   25147 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1127 23:54:09.016195   25147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:54:09.016231   25147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:54:09.029938   25147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34567
	I1127 23:54:09.030369   25147 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:54:09.030895   25147 main.go:141] libmachine: Using API Version  1
	I1127 23:54:09.030919   25147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:54:09.031291   25147 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:54:09.031500   25147 main.go:141] libmachine: (multinode-883509) Calling .GetMachineName
	I1127 23:54:09.031639   25147 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1127 23:54:09.031797   25147 start.go:159] libmachine.API.Create for "multinode-883509" (driver="kvm2")
	I1127 23:54:09.031825   25147 client.go:168] LocalClient.Create starting
	I1127 23:54:09.031853   25147 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem
	I1127 23:54:09.031887   25147 main.go:141] libmachine: Decoding PEM data...
	I1127 23:54:09.031904   25147 main.go:141] libmachine: Parsing certificate...
	I1127 23:54:09.032300   25147 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem
	I1127 23:54:09.032340   25147 main.go:141] libmachine: Decoding PEM data...
	I1127 23:54:09.032359   25147 main.go:141] libmachine: Parsing certificate...
	I1127 23:54:09.032387   25147 main.go:141] libmachine: Running pre-create checks...
	I1127 23:54:09.032407   25147 main.go:141] libmachine: (multinode-883509) Calling .PreCreateCheck
	I1127 23:54:09.033597   25147 main.go:141] libmachine: (multinode-883509) Calling .GetConfigRaw
	I1127 23:54:09.034064   25147 main.go:141] libmachine: Creating machine...
	I1127 23:54:09.034087   25147 main.go:141] libmachine: (multinode-883509) Calling .Create
	I1127 23:54:09.034219   25147 main.go:141] libmachine: (multinode-883509) Creating KVM machine...
	I1127 23:54:09.035423   25147 main.go:141] libmachine: (multinode-883509) DBG | found existing default KVM network
	I1127 23:54:09.036217   25147 main.go:141] libmachine: (multinode-883509) DBG | I1127 23:54:09.036072   25169 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d210}
	I1127 23:54:09.041309   25147 main.go:141] libmachine: (multinode-883509) DBG | trying to create private KVM network mk-multinode-883509 192.168.39.0/24...
	I1127 23:54:09.108749   25147 main.go:141] libmachine: (multinode-883509) Setting up store path in /home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509 ...
	I1127 23:54:09.108802   25147 main.go:141] libmachine: (multinode-883509) DBG | private KVM network mk-multinode-883509 192.168.39.0/24 created
	I1127 23:54:09.108816   25147 main.go:141] libmachine: (multinode-883509) Building disk image from file:///home/jenkins/minikube-integration/17206-4749/.minikube/cache/iso/amd64/minikube-v1.32.1-1701107474-17206-amd64.iso
	I1127 23:54:09.108850   25147 main.go:141] libmachine: (multinode-883509) Downloading /home/jenkins/minikube-integration/17206-4749/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17206-4749/.minikube/cache/iso/amd64/minikube-v1.32.1-1701107474-17206-amd64.iso...
	I1127 23:54:09.108874   25147 main.go:141] libmachine: (multinode-883509) DBG | I1127 23:54:09.108683   25169 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17206-4749/.minikube
	I1127 23:54:09.312286   25147 main.go:141] libmachine: (multinode-883509) DBG | I1127 23:54:09.312157   25169 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa...
	I1127 23:54:09.446406   25147 main.go:141] libmachine: (multinode-883509) DBG | I1127 23:54:09.446278   25169 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/multinode-883509.rawdisk...
	I1127 23:54:09.446434   25147 main.go:141] libmachine: (multinode-883509) DBG | Writing magic tar header
	I1127 23:54:09.446451   25147 main.go:141] libmachine: (multinode-883509) DBG | Writing SSH key tar header
	I1127 23:54:09.446460   25147 main.go:141] libmachine: (multinode-883509) DBG | I1127 23:54:09.446394   25169 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509 ...
	I1127 23:54:09.446475   25147 main.go:141] libmachine: (multinode-883509) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509
	I1127 23:54:09.446509   25147 main.go:141] libmachine: (multinode-883509) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube/machines
	I1127 23:54:09.446525   25147 main.go:141] libmachine: (multinode-883509) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube
	I1127 23:54:09.446538   25147 main.go:141] libmachine: (multinode-883509) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509 (perms=drwx------)
	I1127 23:54:09.446548   25147 main.go:141] libmachine: (multinode-883509) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749
	I1127 23:54:09.446559   25147 main.go:141] libmachine: (multinode-883509) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube/machines (perms=drwxr-xr-x)
	I1127 23:54:09.446570   25147 main.go:141] libmachine: (multinode-883509) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube (perms=drwxr-xr-x)
	I1127 23:54:09.446580   25147 main.go:141] libmachine: (multinode-883509) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749 (perms=drwxrwxr-x)
	I1127 23:54:09.446588   25147 main.go:141] libmachine: (multinode-883509) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1127 23:54:09.446598   25147 main.go:141] libmachine: (multinode-883509) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1127 23:54:09.446612   25147 main.go:141] libmachine: (multinode-883509) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1127 23:54:09.446629   25147 main.go:141] libmachine: (multinode-883509) Creating domain...
	I1127 23:54:09.446650   25147 main.go:141] libmachine: (multinode-883509) DBG | Checking permissions on dir: /home/jenkins
	I1127 23:54:09.446661   25147 main.go:141] libmachine: (multinode-883509) DBG | Checking permissions on dir: /home
	I1127 23:54:09.446674   25147 main.go:141] libmachine: (multinode-883509) DBG | Skipping /home - not owner
	I1127 23:54:09.447834   25147 main.go:141] libmachine: (multinode-883509) define libvirt domain using xml: 
	I1127 23:54:09.447861   25147 main.go:141] libmachine: (multinode-883509) <domain type='kvm'>
	I1127 23:54:09.447874   25147 main.go:141] libmachine: (multinode-883509)   <name>multinode-883509</name>
	I1127 23:54:09.447884   25147 main.go:141] libmachine: (multinode-883509)   <memory unit='MiB'>2200</memory>
	I1127 23:54:09.447897   25147 main.go:141] libmachine: (multinode-883509)   <vcpu>2</vcpu>
	I1127 23:54:09.447907   25147 main.go:141] libmachine: (multinode-883509)   <features>
	I1127 23:54:09.447919   25147 main.go:141] libmachine: (multinode-883509)     <acpi/>
	I1127 23:54:09.447932   25147 main.go:141] libmachine: (multinode-883509)     <apic/>
	I1127 23:54:09.447954   25147 main.go:141] libmachine: (multinode-883509)     <pae/>
	I1127 23:54:09.447976   25147 main.go:141] libmachine: (multinode-883509)     
	I1127 23:54:09.448009   25147 main.go:141] libmachine: (multinode-883509)   </features>
	I1127 23:54:09.448035   25147 main.go:141] libmachine: (multinode-883509)   <cpu mode='host-passthrough'>
	I1127 23:54:09.448050   25147 main.go:141] libmachine: (multinode-883509)   
	I1127 23:54:09.448064   25147 main.go:141] libmachine: (multinode-883509)   </cpu>
	I1127 23:54:09.448078   25147 main.go:141] libmachine: (multinode-883509)   <os>
	I1127 23:54:09.448092   25147 main.go:141] libmachine: (multinode-883509)     <type>hvm</type>
	I1127 23:54:09.448101   25147 main.go:141] libmachine: (multinode-883509)     <boot dev='cdrom'/>
	I1127 23:54:09.448106   25147 main.go:141] libmachine: (multinode-883509)     <boot dev='hd'/>
	I1127 23:54:09.448113   25147 main.go:141] libmachine: (multinode-883509)     <bootmenu enable='no'/>
	I1127 23:54:09.448125   25147 main.go:141] libmachine: (multinode-883509)   </os>
	I1127 23:54:09.448138   25147 main.go:141] libmachine: (multinode-883509)   <devices>
	I1127 23:54:09.448155   25147 main.go:141] libmachine: (multinode-883509)     <disk type='file' device='cdrom'>
	I1127 23:54:09.448177   25147 main.go:141] libmachine: (multinode-883509)       <source file='/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/boot2docker.iso'/>
	I1127 23:54:09.448188   25147 main.go:141] libmachine: (multinode-883509)       <target dev='hdc' bus='scsi'/>
	I1127 23:54:09.448194   25147 main.go:141] libmachine: (multinode-883509)       <readonly/>
	I1127 23:54:09.448202   25147 main.go:141] libmachine: (multinode-883509)     </disk>
	I1127 23:54:09.448211   25147 main.go:141] libmachine: (multinode-883509)     <disk type='file' device='disk'>
	I1127 23:54:09.448227   25147 main.go:141] libmachine: (multinode-883509)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1127 23:54:09.448246   25147 main.go:141] libmachine: (multinode-883509)       <source file='/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/multinode-883509.rawdisk'/>
	I1127 23:54:09.448260   25147 main.go:141] libmachine: (multinode-883509)       <target dev='hda' bus='virtio'/>
	I1127 23:54:09.448272   25147 main.go:141] libmachine: (multinode-883509)     </disk>
	I1127 23:54:09.448284   25147 main.go:141] libmachine: (multinode-883509)     <interface type='network'>
	I1127 23:54:09.448293   25147 main.go:141] libmachine: (multinode-883509)       <source network='mk-multinode-883509'/>
	I1127 23:54:09.448299   25147 main.go:141] libmachine: (multinode-883509)       <model type='virtio'/>
	I1127 23:54:09.448304   25147 main.go:141] libmachine: (multinode-883509)     </interface>
	I1127 23:54:09.448312   25147 main.go:141] libmachine: (multinode-883509)     <interface type='network'>
	I1127 23:54:09.448321   25147 main.go:141] libmachine: (multinode-883509)       <source network='default'/>
	I1127 23:54:09.448339   25147 main.go:141] libmachine: (multinode-883509)       <model type='virtio'/>
	I1127 23:54:09.448357   25147 main.go:141] libmachine: (multinode-883509)     </interface>
	I1127 23:54:09.448372   25147 main.go:141] libmachine: (multinode-883509)     <serial type='pty'>
	I1127 23:54:09.448384   25147 main.go:141] libmachine: (multinode-883509)       <target port='0'/>
	I1127 23:54:09.448397   25147 main.go:141] libmachine: (multinode-883509)     </serial>
	I1127 23:54:09.448409   25147 main.go:141] libmachine: (multinode-883509)     <console type='pty'>
	I1127 23:54:09.448423   25147 main.go:141] libmachine: (multinode-883509)       <target type='serial' port='0'/>
	I1127 23:54:09.448440   25147 main.go:141] libmachine: (multinode-883509)     </console>
	I1127 23:54:09.448458   25147 main.go:141] libmachine: (multinode-883509)     <rng model='virtio'>
	I1127 23:54:09.448479   25147 main.go:141] libmachine: (multinode-883509)       <backend model='random'>/dev/random</backend>
	I1127 23:54:09.448492   25147 main.go:141] libmachine: (multinode-883509)     </rng>
	I1127 23:54:09.448503   25147 main.go:141] libmachine: (multinode-883509)     
	I1127 23:54:09.448522   25147 main.go:141] libmachine: (multinode-883509)     
	I1127 23:54:09.448538   25147 main.go:141] libmachine: (multinode-883509)   </devices>
	I1127 23:54:09.448551   25147 main.go:141] libmachine: (multinode-883509) </domain>
	I1127 23:54:09.448559   25147 main.go:141] libmachine: (multinode-883509) 
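The XML printed line by line above is the complete libvirt domain definition for the VM (ISO as cdrom, the raw disk, two virtio NICs, serial console, virtio RNG). As a rough sketch of how such a document becomes a running domain using the libvirt Go bindings (the libvirt.org/go/libvirt package and the domain.xml file name are assumptions here; the minikube kvm2 driver wraps this differently):

    package main

    import (
    	"log"
    	"os"

    	libvirt "libvirt.org/go/libvirt"
    )

    func main() {
    	// Connect to the same system URI the log shows (qemu:///system).
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	// domain.xml would hold the <domain type='kvm'> document from the log.
    	xml, err := os.ReadFile("domain.xml")
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Define the persistent domain, then start it.
    	dom, err := conn.DomainDefineXML(string(xml))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("domain defined and started")
    }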
	I1127 23:54:09.452692   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:f6:09:cb in network default
	I1127 23:54:09.453246   25147 main.go:141] libmachine: (multinode-883509) Ensuring networks are active...
	I1127 23:54:09.453275   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:09.453807   25147 main.go:141] libmachine: (multinode-883509) Ensuring network default is active
	I1127 23:54:09.454056   25147 main.go:141] libmachine: (multinode-883509) Ensuring network mk-multinode-883509 is active
	I1127 23:54:09.454481   25147 main.go:141] libmachine: (multinode-883509) Getting domain xml...
	I1127 23:54:09.455152   25147 main.go:141] libmachine: (multinode-883509) Creating domain...
	I1127 23:54:10.673924   25147 main.go:141] libmachine: (multinode-883509) Waiting to get IP...
	I1127 23:54:10.674659   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:10.675105   25147 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1127 23:54:10.675135   25147 main.go:141] libmachine: (multinode-883509) DBG | I1127 23:54:10.675062   25169 retry.go:31] will retry after 239.396185ms: waiting for machine to come up
	I1127 23:54:10.916560   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:10.916986   25147 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1127 23:54:10.917014   25147 main.go:141] libmachine: (multinode-883509) DBG | I1127 23:54:10.916922   25169 retry.go:31] will retry after 286.778968ms: waiting for machine to come up
	I1127 23:54:11.206560   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:11.206904   25147 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1127 23:54:11.206952   25147 main.go:141] libmachine: (multinode-883509) DBG | I1127 23:54:11.206872   25169 retry.go:31] will retry after 306.615219ms: waiting for machine to come up
	I1127 23:54:11.515380   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:11.515786   25147 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1127 23:54:11.515822   25147 main.go:141] libmachine: (multinode-883509) DBG | I1127 23:54:11.515733   25169 retry.go:31] will retry after 599.991163ms: waiting for machine to come up
	I1127 23:54:12.117510   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:12.117967   25147 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1127 23:54:12.117999   25147 main.go:141] libmachine: (multinode-883509) DBG | I1127 23:54:12.117921   25169 retry.go:31] will retry after 721.132518ms: waiting for machine to come up
	I1127 23:54:12.840287   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:12.841013   25147 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1127 23:54:12.841044   25147 main.go:141] libmachine: (multinode-883509) DBG | I1127 23:54:12.840976   25169 retry.go:31] will retry after 873.566717ms: waiting for machine to come up
	I1127 23:54:13.715985   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:13.716601   25147 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1127 23:54:13.716635   25147 main.go:141] libmachine: (multinode-883509) DBG | I1127 23:54:13.716539   25169 retry.go:31] will retry after 959.145233ms: waiting for machine to come up
	I1127 23:54:14.677041   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:14.677446   25147 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1127 23:54:14.677470   25147 main.go:141] libmachine: (multinode-883509) DBG | I1127 23:54:14.677414   25169 retry.go:31] will retry after 1.248592585s: waiting for machine to come up
	I1127 23:54:15.927935   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:15.928367   25147 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1127 23:54:15.928399   25147 main.go:141] libmachine: (multinode-883509) DBG | I1127 23:54:15.928316   25169 retry.go:31] will retry after 1.221720345s: waiting for machine to come up
	I1127 23:54:17.151636   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:17.152079   25147 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1127 23:54:17.152104   25147 main.go:141] libmachine: (multinode-883509) DBG | I1127 23:54:17.152042   25169 retry.go:31] will retry after 1.930020811s: waiting for machine to come up
	I1127 23:54:19.083856   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:19.084345   25147 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1127 23:54:19.084374   25147 main.go:141] libmachine: (multinode-883509) DBG | I1127 23:54:19.084301   25169 retry.go:31] will retry after 2.471166261s: waiting for machine to come up
	I1127 23:54:21.559029   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:21.559452   25147 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1127 23:54:21.559500   25147 main.go:141] libmachine: (multinode-883509) DBG | I1127 23:54:21.559387   25169 retry.go:31] will retry after 3.37332809s: waiting for machine to come up
	I1127 23:54:24.934305   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:24.934696   25147 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1127 23:54:24.934730   25147 main.go:141] libmachine: (multinode-883509) DBG | I1127 23:54:24.934631   25169 retry.go:31] will retry after 3.685515699s: waiting for machine to come up
	I1127 23:54:28.624510   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:28.624999   25147 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1127 23:54:28.625025   25147 main.go:141] libmachine: (multinode-883509) DBG | I1127 23:54:28.624967   25169 retry.go:31] will retry after 3.675469115s: waiting for machine to come up
	I1127 23:54:32.301724   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:32.302138   25147 main.go:141] libmachine: (multinode-883509) Found IP for machine: 192.168.39.159
	I1127 23:54:32.302164   25147 main.go:141] libmachine: (multinode-883509) Reserving static IP address...
	I1127 23:54:32.302197   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has current primary IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:32.302528   25147 main.go:141] libmachine: (multinode-883509) DBG | unable to find host DHCP lease matching {name: "multinode-883509", mac: "52:54:00:e1:08:02", ip: "192.168.39.159"} in network mk-multinode-883509
	I1127 23:54:32.371767   25147 main.go:141] libmachine: (multinode-883509) DBG | Getting to WaitForSSH function...
	I1127 23:54:32.371794   25147 main.go:141] libmachine: (multinode-883509) Reserved static IP address: 192.168.39.159
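The repeated "will retry after ...: waiting for machine to come up" lines above are a bounded retry loop: the driver keeps querying the network's DHCP leases for the domain's MAC address, sleeping a little longer each round, until an IP appears or it gives up. A small sketch of that pattern; lookupIP is a hypothetical stand-in for the lease query:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // lookupIP stands in for querying the libvirt network's DHCP leases
    // for the given MAC address.
    func lookupIP(mac string) (string, error) {
    	return "", errors.New("no lease yet") // pretend the guest is not up yet
    }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(mac); err == nil {
    			return ip, nil
    		}
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
    		time.Sleep(delay)
    		delay = delay * 3 / 2 // grow the wait, roughly like the backoff in the log
    	}
    	return "", fmt.Errorf("machine %s did not get an IP within %v", mac, timeout)
    }

    func main() {
    	if _, err := waitForIP("52:54:00:e1:08:02", 2*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }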
	I1127 23:54:32.371807   25147 main.go:141] libmachine: (multinode-883509) Waiting for SSH to be available...
	I1127 23:54:32.374171   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:32.374474   25147 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e1:08:02}
	I1127 23:54:32.374504   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:32.374622   25147 main.go:141] libmachine: (multinode-883509) DBG | Using SSH client type: external
	I1127 23:54:32.374652   25147 main.go:141] libmachine: (multinode-883509) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa (-rw-------)
	I1127 23:54:32.374687   25147 main.go:141] libmachine: (multinode-883509) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1127 23:54:32.374702   25147 main.go:141] libmachine: (multinode-883509) DBG | About to run SSH command:
	I1127 23:54:32.374714   25147 main.go:141] libmachine: (multinode-883509) DBG | exit 0
	I1127 23:54:32.472112   25147 main.go:141] libmachine: (multinode-883509) DBG | SSH cmd err, output: <nil>: 
	I1127 23:54:32.472337   25147 main.go:141] libmachine: (multinode-883509) KVM machine creation complete!
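Creation is declared complete once a trivial `exit 0` succeeds over SSH with the freshly generated key, using the external ssh client and options shown a few lines above. A sketch of that readiness probe via os/exec; the retry count and sleep are illustrative, and the paths are copied from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshReady runs "exit 0" on the guest with options like those in the log;
    // a zero exit status means SSH is up.
    func sshReady(ip, key string) bool {
    	cmd := exec.Command("ssh",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-i", key,
    		"docker@"+ip, "exit 0")
    	return cmd.Run() == nil
    }

    func main() {
    	ip := "192.168.39.159"
    	key := "/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa"
    	for i := 0; i < 10; i++ {
    		if sshReady(ip, key) {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("gave up waiting for SSH")
    }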
	I1127 23:54:32.472710   25147 main.go:141] libmachine: (multinode-883509) Calling .GetConfigRaw
	I1127 23:54:32.473246   25147 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1127 23:54:32.473414   25147 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1127 23:54:32.473572   25147 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1127 23:54:32.473592   25147 main.go:141] libmachine: (multinode-883509) Calling .GetState
	I1127 23:54:32.475078   25147 main.go:141] libmachine: Detecting operating system of created instance...
	I1127 23:54:32.475094   25147 main.go:141] libmachine: Waiting for SSH to be available...
	I1127 23:54:32.475108   25147 main.go:141] libmachine: Getting to WaitForSSH function...
	I1127 23:54:32.475119   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1127 23:54:32.477303   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:32.477621   25147 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:54:32.477651   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:32.477832   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1127 23:54:32.478001   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:54:32.478126   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:54:32.478240   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1127 23:54:32.478367   25147 main.go:141] libmachine: Using SSH client type: native
	I1127 23:54:32.478706   25147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1127 23:54:32.478720   25147 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1127 23:54:32.607833   25147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 23:54:32.607856   25147 main.go:141] libmachine: Detecting the provisioner...
	I1127 23:54:32.607864   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1127 23:54:32.610619   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:32.610959   25147 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:54:32.610981   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:32.611125   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1127 23:54:32.611324   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:54:32.611464   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:54:32.611604   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1127 23:54:32.611745   25147 main.go:141] libmachine: Using SSH client type: native
	I1127 23:54:32.612048   25147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1127 23:54:32.612060   25147 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1127 23:54:32.741229   25147 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g8be4f20-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1127 23:54:32.741309   25147 main.go:141] libmachine: found compatible host: buildroot
	I1127 23:54:32.741323   25147 main.go:141] libmachine: Provisioning with buildroot...
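Provisioning branches on what `cat /etc/os-release` reports: the output above identifies a Buildroot guest, which selects the buildroot provisioner. A tiny sketch of that detection step, field parsing only and not minikube's provisioner registry:

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // osID extracts the ID= field from /etc/os-release style content.
    func osID(osRelease string) string {
    	sc := bufio.NewScanner(strings.NewReader(osRelease))
    	for sc.Scan() {
    		line := sc.Text()
    		if strings.HasPrefix(line, "ID=") {
    			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
    		}
    	}
    	return ""
    }

    func main() {
    	out := "NAME=Buildroot\nVERSION=2021.02.12-1-g8be4f20-dirty\nID=buildroot\nVERSION_ID=2021.02.12\n"
    	if osID(out) == "buildroot" {
    		fmt.Println("found compatible host: buildroot")
    	}
    }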
	I1127 23:54:32.741335   25147 main.go:141] libmachine: (multinode-883509) Calling .GetMachineName
	I1127 23:54:32.741596   25147 buildroot.go:166] provisioning hostname "multinode-883509"
	I1127 23:54:32.741660   25147 main.go:141] libmachine: (multinode-883509) Calling .GetMachineName
	I1127 23:54:32.741835   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1127 23:54:32.744280   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:32.744702   25147 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:54:32.744737   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:32.744768   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1127 23:54:32.744916   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:54:32.745088   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:54:32.745219   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1127 23:54:32.745370   25147 main.go:141] libmachine: Using SSH client type: native
	I1127 23:54:32.745730   25147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1127 23:54:32.745744   25147 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-883509 && echo "multinode-883509" | sudo tee /etc/hostname
	I1127 23:54:32.889977   25147 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-883509
	
	I1127 23:54:32.889999   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1127 23:54:32.892483   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:32.892851   25147 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:54:32.892879   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:32.893052   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1127 23:54:32.893210   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:54:32.893330   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:54:32.893434   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1127 23:54:32.893640   25147 main.go:141] libmachine: Using SSH client type: native
	I1127 23:54:32.894009   25147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1127 23:54:32.894027   25147 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-883509' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-883509/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-883509' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 23:54:33.038625   25147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 23:54:33.038654   25147 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1127 23:54:33.038701   25147 buildroot.go:174] setting up certificates
	I1127 23:54:33.038717   25147 provision.go:83] configureAuth start
	I1127 23:54:33.038729   25147 main.go:141] libmachine: (multinode-883509) Calling .GetMachineName
	I1127 23:54:33.038999   25147 main.go:141] libmachine: (multinode-883509) Calling .GetIP
	I1127 23:54:33.041441   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.041773   25147 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:54:33.041793   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.041960   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1127 23:54:33.043949   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.044269   25147 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:54:33.044296   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.044416   25147 provision.go:138] copyHostCerts
	I1127 23:54:33.044438   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1127 23:54:33.044472   25147 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1127 23:54:33.044484   25147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1127 23:54:33.044554   25147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1127 23:54:33.044688   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1127 23:54:33.044716   25147 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1127 23:54:33.044727   25147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1127 23:54:33.044770   25147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1127 23:54:33.044832   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1127 23:54:33.044854   25147 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1127 23:54:33.044860   25147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1127 23:54:33.044886   25147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1127 23:54:33.044946   25147 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.multinode-883509 san=[192.168.39.159 192.168.39.159 localhost 127.0.0.1 minikube multinode-883509]
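The server certificate above is issued with the SANs listed in the log: the VM IP, loopback, and the minikube/host names. A compact sketch of attaching those SANs with crypto/x509; it is self-signed purely for brevity (the real flow signs with the cluster CA key), and the 26280h lifetime mirrors the CertExpiration value in the config dump:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Key pair for the server certificate.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// SANs mirroring the log: the VM IP, loopback, and the hostnames.
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-883509"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "multinode-883509"},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.39.159"), net.ParseIP("127.0.0.1")},
    	}

    	// Self-signed here for brevity; the provisioner signs with the CA instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }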
	I1127 23:54:33.124582   25147 provision.go:172] copyRemoteCerts
	I1127 23:54:33.124642   25147 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 23:54:33.124662   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1127 23:54:33.127201   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.127481   25147 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:54:33.127501   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.127680   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1127 23:54:33.127855   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:54:33.127982   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1127 23:54:33.128078   25147 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa Username:docker}
	I1127 23:54:33.221275   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1127 23:54:33.221334   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1127 23:54:33.243800   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1127 23:54:33.243878   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1127 23:54:33.266129   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1127 23:54:33.266199   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1127 23:54:33.286679   25147 provision.go:86] duration metric: configureAuth took 247.94721ms
	I1127 23:54:33.286707   25147 buildroot.go:189] setting minikube options for container-runtime
	I1127 23:54:33.286967   25147 config.go:182] Loaded profile config "multinode-883509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:54:33.287049   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1127 23:54:33.289279   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.289570   25147 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:54:33.289600   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.289732   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1127 23:54:33.289912   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:54:33.290048   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:54:33.290159   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1127 23:54:33.290320   25147 main.go:141] libmachine: Using SSH client type: native
	I1127 23:54:33.290625   25147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1127 23:54:33.290641   25147 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1127 23:54:33.613875   25147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1127 23:54:33.613900   25147 main.go:141] libmachine: Checking connection to Docker...
	I1127 23:54:33.613912   25147 main.go:141] libmachine: (multinode-883509) Calling .GetURL
	I1127 23:54:33.615140   25147 main.go:141] libmachine: (multinode-883509) DBG | Using libvirt version 6000000
	I1127 23:54:33.617395   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.617695   25147 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:54:33.617732   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.617891   25147 main.go:141] libmachine: Docker is up and running!
	I1127 23:54:33.617911   25147 main.go:141] libmachine: Reticulating splines...
	I1127 23:54:33.617918   25147 client.go:171] LocalClient.Create took 24.586084394s
	I1127 23:54:33.617942   25147 start.go:167] duration metric: libmachine.API.Create for "multinode-883509" took 24.586146217s
	I1127 23:54:33.617952   25147 start.go:300] post-start starting for "multinode-883509" (driver="kvm2")
	I1127 23:54:33.617960   25147 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 23:54:33.617975   25147 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1127 23:54:33.618176   25147 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 23:54:33.618198   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1127 23:54:33.620211   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.620537   25147 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:54:33.620561   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.620699   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1127 23:54:33.620876   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:54:33.621045   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1127 23:54:33.621145   25147 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa Username:docker}
	I1127 23:54:33.714245   25147 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 23:54:33.718002   25147 command_runner.go:130] > NAME=Buildroot
	I1127 23:54:33.718016   25147 command_runner.go:130] > VERSION=2021.02.12-1-g8be4f20-dirty
	I1127 23:54:33.718020   25147 command_runner.go:130] > ID=buildroot
	I1127 23:54:33.718025   25147 command_runner.go:130] > VERSION_ID=2021.02.12
	I1127 23:54:33.718030   25147 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1127 23:54:33.718257   25147 info.go:137] Remote host: Buildroot 2021.02.12
	I1127 23:54:33.718279   25147 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1127 23:54:33.718347   25147 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1127 23:54:33.718449   25147 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1127 23:54:33.718463   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> /etc/ssl/certs/119302.pem
	I1127 23:54:33.718561   25147 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1127 23:54:33.727016   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1127 23:54:33.748160   25147 start.go:303] post-start completed in 130.198141ms
	I1127 23:54:33.748206   25147 main.go:141] libmachine: (multinode-883509) Calling .GetConfigRaw
	I1127 23:54:33.748786   25147 main.go:141] libmachine: (multinode-883509) Calling .GetIP
	I1127 23:54:33.751221   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.751532   25147 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:54:33.751563   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.751819   25147 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/config.json ...
	I1127 23:54:33.752020   25147 start.go:128] duration metric: createHost completed in 24.73793429s
	I1127 23:54:33.752048   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1127 23:54:33.754185   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.754532   25147 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:54:33.754552   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.754694   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1127 23:54:33.754860   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:54:33.755006   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:54:33.755157   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1127 23:54:33.755280   25147 main.go:141] libmachine: Using SSH client type: native
	I1127 23:54:33.755569   25147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1127 23:54:33.755580   25147 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1127 23:54:33.885311   25147 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701129273.868301073
	
	I1127 23:54:33.885342   25147 fix.go:206] guest clock: 1701129273.868301073
	I1127 23:54:33.885355   25147 fix.go:219] Guest: 2023-11-27 23:54:33.868301073 +0000 UTC Remote: 2023-11-27 23:54:33.752033186 +0000 UTC m=+24.861882891 (delta=116.267887ms)
	I1127 23:54:33.885397   25147 fix.go:190] guest clock delta is within tolerance: 116.267887ms
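The clock check compares the guest's `date +%s.%N` output against the host's wall clock captured around the same call; here the difference is about 116ms, which is judged within tolerance, so no clock adjustment is forced. A worked sketch of that comparison using the two timestamps from the log; the one-second tolerance is an assumption for illustration, not minikube's actual threshold:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	// Guest output of `date +%s.%N`, as seen in the log.
    	guestRaw := "1701129273.868301073"
    	parts := strings.SplitN(guestRaw, ".", 2)
    	sec, _ := strconv.ParseInt(parts[0], 10, 64)
    	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
    	guest := time.Unix(sec, nsec)

    	// Host-side timestamp captured around the same moment (from the log).
    	host, _ := time.Parse(time.RFC3339Nano, "2023-11-27T23:54:33.752033186Z")

    	delta := guest.Sub(host)
    	fmt.Printf("guest clock delta: %v\n", delta) // ~116.267887ms

    	const tolerance = time.Second // illustrative threshold only
    	if math.Abs(float64(delta)) < float64(tolerance) {
    		fmt.Println("guest clock delta is within tolerance")
    	}
    }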
	I1127 23:54:33.885409   25147 start.go:83] releasing machines lock for "multinode-883509", held for 24.871406716s
	I1127 23:54:33.885441   25147 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1127 23:54:33.885708   25147 main.go:141] libmachine: (multinode-883509) Calling .GetIP
	I1127 23:54:33.888202   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.888547   25147 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:54:33.888589   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.888729   25147 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1127 23:54:33.889190   25147 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1127 23:54:33.889346   25147 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1127 23:54:33.889420   25147 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 23:54:33.889459   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1127 23:54:33.889531   25147 ssh_runner.go:195] Run: cat /version.json
	I1127 23:54:33.889557   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1127 23:54:33.892137   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.892320   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.892428   25147 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:54:33.892465   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.892603   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1127 23:54:33.892694   25147 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:54:33.892725   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:33.892779   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:54:33.892925   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1127 23:54:33.892948   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1127 23:54:33.893097   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:54:33.893108   25147 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa Username:docker}
	I1127 23:54:33.893233   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1127 23:54:33.893330   25147 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa Username:docker}
	I1127 23:54:33.981019   25147 command_runner.go:130] > {"iso_version": "v1.32.1-1701107474-17206", "kicbase_version": "v0.0.42-1700142204-17634", "minikube_version": "v1.32.0", "commit": "bcc467dd5a1a124d966bcc72a040bb167e304544"}
	I1127 23:54:33.981256   25147 ssh_runner.go:195] Run: systemctl --version
	I1127 23:54:34.003294   25147 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1127 23:54:34.003347   25147 command_runner.go:130] > systemd 247 (247)
	I1127 23:54:34.003364   25147 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1127 23:54:34.003424   25147 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1127 23:54:34.155359   25147 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1127 23:54:34.161645   25147 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1127 23:54:34.162242   25147 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1127 23:54:34.162311   25147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:54:34.176315   25147 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1127 23:54:34.176372   25147 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1127 23:54:34.176386   25147 start.go:472] detecting cgroup driver to use...
	I1127 23:54:34.176448   25147 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 23:54:34.188832   25147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 23:54:34.200215   25147 docker.go:203] disabling cri-docker service (if available) ...
	I1127 23:54:34.200255   25147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 23:54:34.211917   25147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 23:54:34.223556   25147 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1127 23:54:34.236106   25147 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1127 23:54:34.320854   25147 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 23:54:34.334460   25147 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1127 23:54:34.444077   25147 docker.go:219] disabling docker service ...
	I1127 23:54:34.444159   25147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 23:54:34.457791   25147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 23:54:34.468645   25147 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1127 23:54:34.468836   25147 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 23:54:34.481782   25147 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1127 23:54:34.579066   25147 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 23:54:34.590861   25147 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1127 23:54:34.591197   25147 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1127 23:54:34.685568   25147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1127 23:54:34.697467   25147 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 23:54:34.715253   25147 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1127 23:54:34.715308   25147 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1127 23:54:34.715352   25147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:54:34.724060   25147 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1127 23:54:34.724120   25147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:54:34.732765   25147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:54:34.741380   25147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:54:34.749768   25147 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 23:54:34.758541   25147 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 23:54:34.766126   25147 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1127 23:54:34.766157   25147 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1127 23:54:34.766195   25147 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1127 23:54:34.777445   25147 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1127 23:54:34.786043   25147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 23:54:34.895505   25147 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1127 23:54:35.061458   25147 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1127 23:54:35.061543   25147 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1127 23:54:35.066467   25147 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1127 23:54:35.066493   25147 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1127 23:54:35.066510   25147 command_runner.go:130] > Device: 16h/22d	Inode: 796         Links: 1
	I1127 23:54:35.066517   25147 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 23:54:35.066522   25147 command_runner.go:130] > Access: 2023-11-27 23:54:35.033146075 +0000
	I1127 23:54:35.066528   25147 command_runner.go:130] > Modify: 2023-11-27 23:54:35.033146075 +0000
	I1127 23:54:35.066533   25147 command_runner.go:130] > Change: 2023-11-27 23:54:35.033146075 +0000
	I1127 23:54:35.066537   25147 command_runner.go:130] >  Birth: -
	I1127 23:54:35.066550   25147 start.go:540] Will wait 60s for crictl version
	I1127 23:54:35.066584   25147 ssh_runner.go:195] Run: which crictl
	I1127 23:54:35.070347   25147 command_runner.go:130] > /usr/bin/crictl
	I1127 23:54:35.070657   25147 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1127 23:54:35.110102   25147 command_runner.go:130] > Version:  0.1.0
	I1127 23:54:35.110131   25147 command_runner.go:130] > RuntimeName:  cri-o
	I1127 23:54:35.110139   25147 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1127 23:54:35.110146   25147 command_runner.go:130] > RuntimeApiVersion:  v1
	I1127 23:54:35.111459   25147 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1127 23:54:35.111545   25147 ssh_runner.go:195] Run: crio --version
	I1127 23:54:35.154487   25147 command_runner.go:130] > crio version 1.24.1
	I1127 23:54:35.154514   25147 command_runner.go:130] > Version:          1.24.1
	I1127 23:54:35.154530   25147 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1127 23:54:35.154538   25147 command_runner.go:130] > GitTreeState:     dirty
	I1127 23:54:35.154547   25147 command_runner.go:130] > BuildDate:        2023-11-27T22:40:48Z
	I1127 23:54:35.154556   25147 command_runner.go:130] > GoVersion:        go1.19.9
	I1127 23:54:35.154563   25147 command_runner.go:130] > Compiler:         gc
	I1127 23:54:35.154576   25147 command_runner.go:130] > Platform:         linux/amd64
	I1127 23:54:35.154584   25147 command_runner.go:130] > Linkmode:         dynamic
	I1127 23:54:35.154596   25147 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1127 23:54:35.154615   25147 command_runner.go:130] > SeccompEnabled:   true
	I1127 23:54:35.154625   25147 command_runner.go:130] > AppArmorEnabled:  false
	I1127 23:54:35.155756   25147 ssh_runner.go:195] Run: crio --version
	I1127 23:54:35.203848   25147 command_runner.go:130] > crio version 1.24.1
	I1127 23:54:35.203872   25147 command_runner.go:130] > Version:          1.24.1
	I1127 23:54:35.203887   25147 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1127 23:54:35.203892   25147 command_runner.go:130] > GitTreeState:     dirty
	I1127 23:54:35.203899   25147 command_runner.go:130] > BuildDate:        2023-11-27T22:40:48Z
	I1127 23:54:35.203906   25147 command_runner.go:130] > GoVersion:        go1.19.9
	I1127 23:54:35.203913   25147 command_runner.go:130] > Compiler:         gc
	I1127 23:54:35.203921   25147 command_runner.go:130] > Platform:         linux/amd64
	I1127 23:54:35.203930   25147 command_runner.go:130] > Linkmode:         dynamic
	I1127 23:54:35.203942   25147 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1127 23:54:35.203949   25147 command_runner.go:130] > SeccompEnabled:   true
	I1127 23:54:35.203954   25147 command_runner.go:130] > AppArmorEnabled:  false
	I1127 23:54:35.206984   25147 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1127 23:54:35.208376   25147 main.go:141] libmachine: (multinode-883509) Calling .GetIP
	I1127 23:54:35.210831   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:35.211274   25147 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:54:35.211301   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:54:35.211499   25147 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1127 23:54:35.215347   25147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:54:35.226729   25147 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:54:35.226777   25147 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 23:54:35.261529   25147 command_runner.go:130] > {
	I1127 23:54:35.261559   25147 command_runner.go:130] >   "images": [
	I1127 23:54:35.261566   25147 command_runner.go:130] >   ]
	I1127 23:54:35.261577   25147 command_runner.go:130] > }
	I1127 23:54:35.262570   25147 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1127 23:54:35.262641   25147 ssh_runner.go:195] Run: which lz4
	I1127 23:54:35.266441   25147 command_runner.go:130] > /usr/bin/lz4
	I1127 23:54:35.266466   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1127 23:54:35.266526   25147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1127 23:54:35.270328   25147 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1127 23:54:35.270606   25147 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1127 23:54:35.270626   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1127 23:54:37.021964   25147 crio.go:444] Took 1.755456 seconds to copy over tarball
	I1127 23:54:37.022027   25147 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1127 23:54:39.930905   25147 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.908849848s)
	I1127 23:54:39.930931   25147 crio.go:451] Took 2.908940 seconds to extract the tarball
	I1127 23:54:39.930941   25147 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1127 23:54:39.970693   25147 ssh_runner.go:195] Run: sudo crictl images --output json
	I1127 23:54:40.035706   25147 command_runner.go:130] > {
	I1127 23:54:40.035730   25147 command_runner.go:130] >   "images": [
	I1127 23:54:40.035735   25147 command_runner.go:130] >     {
	I1127 23:54:40.035743   25147 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1127 23:54:40.035748   25147 command_runner.go:130] >       "repoTags": [
	I1127 23:54:40.035754   25147 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1127 23:54:40.035757   25147 command_runner.go:130] >       ],
	I1127 23:54:40.035764   25147 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:40.035778   25147 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1127 23:54:40.035795   25147 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1127 23:54:40.035802   25147 command_runner.go:130] >       ],
	I1127 23:54:40.035809   25147 command_runner.go:130] >       "size": "65258016",
	I1127 23:54:40.035816   25147 command_runner.go:130] >       "uid": null,
	I1127 23:54:40.035823   25147 command_runner.go:130] >       "username": "",
	I1127 23:54:40.035829   25147 command_runner.go:130] >       "spec": null,
	I1127 23:54:40.035834   25147 command_runner.go:130] >       "pinned": false
	I1127 23:54:40.035841   25147 command_runner.go:130] >     },
	I1127 23:54:40.035844   25147 command_runner.go:130] >     {
	I1127 23:54:40.035854   25147 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1127 23:54:40.035860   25147 command_runner.go:130] >       "repoTags": [
	I1127 23:54:40.035866   25147 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1127 23:54:40.035870   25147 command_runner.go:130] >       ],
	I1127 23:54:40.035876   25147 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:40.035883   25147 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1127 23:54:40.035893   25147 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1127 23:54:40.035897   25147 command_runner.go:130] >       ],
	I1127 23:54:40.035902   25147 command_runner.go:130] >       "size": "31470524",
	I1127 23:54:40.035907   25147 command_runner.go:130] >       "uid": null,
	I1127 23:54:40.035912   25147 command_runner.go:130] >       "username": "",
	I1127 23:54:40.035918   25147 command_runner.go:130] >       "spec": null,
	I1127 23:54:40.035922   25147 command_runner.go:130] >       "pinned": false
	I1127 23:54:40.035928   25147 command_runner.go:130] >     },
	I1127 23:54:40.035932   25147 command_runner.go:130] >     {
	I1127 23:54:40.035940   25147 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1127 23:54:40.035945   25147 command_runner.go:130] >       "repoTags": [
	I1127 23:54:40.035950   25147 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1127 23:54:40.035957   25147 command_runner.go:130] >       ],
	I1127 23:54:40.035975   25147 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:40.035985   25147 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1127 23:54:40.035993   25147 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1127 23:54:40.035999   25147 command_runner.go:130] >       ],
	I1127 23:54:40.036004   25147 command_runner.go:130] >       "size": "53621675",
	I1127 23:54:40.036010   25147 command_runner.go:130] >       "uid": null,
	I1127 23:54:40.036015   25147 command_runner.go:130] >       "username": "",
	I1127 23:54:40.036019   25147 command_runner.go:130] >       "spec": null,
	I1127 23:54:40.036025   25147 command_runner.go:130] >       "pinned": false
	I1127 23:54:40.036028   25147 command_runner.go:130] >     },
	I1127 23:54:40.036034   25147 command_runner.go:130] >     {
	I1127 23:54:40.036041   25147 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1127 23:54:40.036047   25147 command_runner.go:130] >       "repoTags": [
	I1127 23:54:40.036052   25147 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1127 23:54:40.036058   25147 command_runner.go:130] >       ],
	I1127 23:54:40.036062   25147 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:40.036069   25147 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1127 23:54:40.036080   25147 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1127 23:54:40.036090   25147 command_runner.go:130] >       ],
	I1127 23:54:40.036095   25147 command_runner.go:130] >       "size": "295456551",
	I1127 23:54:40.036101   25147 command_runner.go:130] >       "uid": {
	I1127 23:54:40.036105   25147 command_runner.go:130] >         "value": "0"
	I1127 23:54:40.036108   25147 command_runner.go:130] >       },
	I1127 23:54:40.036112   25147 command_runner.go:130] >       "username": "",
	I1127 23:54:40.036119   25147 command_runner.go:130] >       "spec": null,
	I1127 23:54:40.036123   25147 command_runner.go:130] >       "pinned": false
	I1127 23:54:40.036129   25147 command_runner.go:130] >     },
	I1127 23:54:40.036132   25147 command_runner.go:130] >     {
	I1127 23:54:40.036138   25147 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1127 23:54:40.036145   25147 command_runner.go:130] >       "repoTags": [
	I1127 23:54:40.036150   25147 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1127 23:54:40.036154   25147 command_runner.go:130] >       ],
	I1127 23:54:40.036158   25147 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:40.036166   25147 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1127 23:54:40.036176   25147 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1127 23:54:40.036180   25147 command_runner.go:130] >       ],
	I1127 23:54:40.036185   25147 command_runner.go:130] >       "size": "127226832",
	I1127 23:54:40.036191   25147 command_runner.go:130] >       "uid": {
	I1127 23:54:40.036195   25147 command_runner.go:130] >         "value": "0"
	I1127 23:54:40.036198   25147 command_runner.go:130] >       },
	I1127 23:54:40.036202   25147 command_runner.go:130] >       "username": "",
	I1127 23:54:40.036207   25147 command_runner.go:130] >       "spec": null,
	I1127 23:54:40.036213   25147 command_runner.go:130] >       "pinned": false
	I1127 23:54:40.036216   25147 command_runner.go:130] >     },
	I1127 23:54:40.036220   25147 command_runner.go:130] >     {
	I1127 23:54:40.036226   25147 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1127 23:54:40.036232   25147 command_runner.go:130] >       "repoTags": [
	I1127 23:54:40.036238   25147 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1127 23:54:40.036242   25147 command_runner.go:130] >       ],
	I1127 23:54:40.036246   25147 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:40.036256   25147 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1127 23:54:40.036264   25147 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1127 23:54:40.036269   25147 command_runner.go:130] >       ],
	I1127 23:54:40.036274   25147 command_runner.go:130] >       "size": "123261750",
	I1127 23:54:40.036278   25147 command_runner.go:130] >       "uid": {
	I1127 23:54:40.036282   25147 command_runner.go:130] >         "value": "0"
	I1127 23:54:40.036290   25147 command_runner.go:130] >       },
	I1127 23:54:40.036294   25147 command_runner.go:130] >       "username": "",
	I1127 23:54:40.036299   25147 command_runner.go:130] >       "spec": null,
	I1127 23:54:40.036303   25147 command_runner.go:130] >       "pinned": false
	I1127 23:54:40.036307   25147 command_runner.go:130] >     },
	I1127 23:54:40.036310   25147 command_runner.go:130] >     {
	I1127 23:54:40.036319   25147 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1127 23:54:40.036323   25147 command_runner.go:130] >       "repoTags": [
	I1127 23:54:40.036328   25147 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1127 23:54:40.036334   25147 command_runner.go:130] >       ],
	I1127 23:54:40.036338   25147 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:40.036345   25147 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1127 23:54:40.036354   25147 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1127 23:54:40.036358   25147 command_runner.go:130] >       ],
	I1127 23:54:40.036362   25147 command_runner.go:130] >       "size": "74749335",
	I1127 23:54:40.036367   25147 command_runner.go:130] >       "uid": null,
	I1127 23:54:40.036372   25147 command_runner.go:130] >       "username": "",
	I1127 23:54:40.036377   25147 command_runner.go:130] >       "spec": null,
	I1127 23:54:40.036381   25147 command_runner.go:130] >       "pinned": false
	I1127 23:54:40.036385   25147 command_runner.go:130] >     },
	I1127 23:54:40.036391   25147 command_runner.go:130] >     {
	I1127 23:54:40.036396   25147 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1127 23:54:40.036401   25147 command_runner.go:130] >       "repoTags": [
	I1127 23:54:40.036406   25147 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1127 23:54:40.036412   25147 command_runner.go:130] >       ],
	I1127 23:54:40.036416   25147 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:40.036468   25147 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1127 23:54:40.036483   25147 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1127 23:54:40.036487   25147 command_runner.go:130] >       ],
	I1127 23:54:40.036491   25147 command_runner.go:130] >       "size": "61551410",
	I1127 23:54:40.036495   25147 command_runner.go:130] >       "uid": {
	I1127 23:54:40.036500   25147 command_runner.go:130] >         "value": "0"
	I1127 23:54:40.036506   25147 command_runner.go:130] >       },
	I1127 23:54:40.036510   25147 command_runner.go:130] >       "username": "",
	I1127 23:54:40.036514   25147 command_runner.go:130] >       "spec": null,
	I1127 23:54:40.036521   25147 command_runner.go:130] >       "pinned": false
	I1127 23:54:40.036524   25147 command_runner.go:130] >     },
	I1127 23:54:40.036528   25147 command_runner.go:130] >     {
	I1127 23:54:40.036534   25147 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1127 23:54:40.036540   25147 command_runner.go:130] >       "repoTags": [
	I1127 23:54:40.036545   25147 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1127 23:54:40.036549   25147 command_runner.go:130] >       ],
	I1127 23:54:40.036554   25147 command_runner.go:130] >       "repoDigests": [
	I1127 23:54:40.036561   25147 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1127 23:54:40.036570   25147 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1127 23:54:40.036575   25147 command_runner.go:130] >       ],
	I1127 23:54:40.036581   25147 command_runner.go:130] >       "size": "750414",
	I1127 23:54:40.036585   25147 command_runner.go:130] >       "uid": {
	I1127 23:54:40.036591   25147 command_runner.go:130] >         "value": "65535"
	I1127 23:54:40.036595   25147 command_runner.go:130] >       },
	I1127 23:54:40.036599   25147 command_runner.go:130] >       "username": "",
	I1127 23:54:40.036606   25147 command_runner.go:130] >       "spec": null,
	I1127 23:54:40.036610   25147 command_runner.go:130] >       "pinned": false
	I1127 23:54:40.036613   25147 command_runner.go:130] >     }
	I1127 23:54:40.036616   25147 command_runner.go:130] >   ]
	I1127 23:54:40.036620   25147 command_runner.go:130] > }
	I1127 23:54:40.037162   25147 crio.go:496] all images are preloaded for cri-o runtime.
	I1127 23:54:40.037179   25147 cache_images.go:84] Images are preloaded, skipping loading
	I1127 23:54:40.037236   25147 ssh_runner.go:195] Run: crio config
	I1127 23:54:40.082835   25147 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1127 23:54:40.082864   25147 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1127 23:54:40.082871   25147 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1127 23:54:40.082876   25147 command_runner.go:130] > #
	I1127 23:54:40.082888   25147 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1127 23:54:40.082897   25147 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1127 23:54:40.082908   25147 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1127 23:54:40.082924   25147 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1127 23:54:40.082932   25147 command_runner.go:130] > # reload'.
	I1127 23:54:40.082945   25147 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1127 23:54:40.082959   25147 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1127 23:54:40.082990   25147 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1127 23:54:40.083000   25147 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1127 23:54:40.083006   25147 command_runner.go:130] > [crio]
	I1127 23:54:40.083015   25147 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1127 23:54:40.083024   25147 command_runner.go:130] > # containers images, in this directory.
	I1127 23:54:40.083032   25147 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1127 23:54:40.083051   25147 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1127 23:54:40.083065   25147 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1127 23:54:40.083078   25147 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1127 23:54:40.083089   25147 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1127 23:54:40.083099   25147 command_runner.go:130] > storage_driver = "overlay"
	I1127 23:54:40.083109   25147 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1127 23:54:40.083123   25147 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1127 23:54:40.083134   25147 command_runner.go:130] > storage_option = [
	I1127 23:54:40.083171   25147 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1127 23:54:40.083182   25147 command_runner.go:130] > ]
	I1127 23:54:40.083194   25147 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1127 23:54:40.083208   25147 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1127 23:54:40.083220   25147 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1127 23:54:40.083233   25147 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1127 23:54:40.083247   25147 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1127 23:54:40.083255   25147 command_runner.go:130] > # always happen on a node reboot
	I1127 23:54:40.083265   25147 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1127 23:54:40.083271   25147 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1127 23:54:40.083278   25147 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1127 23:54:40.083288   25147 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1127 23:54:40.083300   25147 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1127 23:54:40.083316   25147 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1127 23:54:40.083332   25147 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1127 23:54:40.083342   25147 command_runner.go:130] > # internal_wipe = true
	I1127 23:54:40.083351   25147 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1127 23:54:40.083363   25147 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1127 23:54:40.083369   25147 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1127 23:54:40.083380   25147 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1127 23:54:40.083395   25147 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1127 23:54:40.083405   25147 command_runner.go:130] > [crio.api]
	I1127 23:54:40.083414   25147 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1127 23:54:40.083424   25147 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1127 23:54:40.083436   25147 command_runner.go:130] > # IP address on which the stream server will listen.
	I1127 23:54:40.083444   25147 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1127 23:54:40.083454   25147 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1127 23:54:40.083461   25147 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1127 23:54:40.083471   25147 command_runner.go:130] > # stream_port = "0"
	I1127 23:54:40.083480   25147 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1127 23:54:40.083491   25147 command_runner.go:130] > # stream_enable_tls = false
	I1127 23:54:40.083501   25147 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1127 23:54:40.083513   25147 command_runner.go:130] > # stream_idle_timeout = ""
	I1127 23:54:40.083527   25147 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1127 23:54:40.083540   25147 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1127 23:54:40.083546   25147 command_runner.go:130] > # minutes.
	I1127 23:54:40.083557   25147 command_runner.go:130] > # stream_tls_cert = ""
	I1127 23:54:40.083571   25147 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1127 23:54:40.083586   25147 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1127 23:54:40.083612   25147 command_runner.go:130] > # stream_tls_key = ""
	I1127 23:54:40.083625   25147 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1127 23:54:40.083639   25147 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1127 23:54:40.083650   25147 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1127 23:54:40.083660   25147 command_runner.go:130] > # stream_tls_ca = ""
	I1127 23:54:40.083672   25147 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1127 23:54:40.083683   25147 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1127 23:54:40.083694   25147 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1127 23:54:40.083705   25147 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1127 23:54:40.083724   25147 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1127 23:54:40.083737   25147 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1127 23:54:40.083744   25147 command_runner.go:130] > [crio.runtime]
	I1127 23:54:40.083757   25147 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1127 23:54:40.083771   25147 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1127 23:54:40.083781   25147 command_runner.go:130] > # "nofile=1024:2048"
	I1127 23:54:40.083791   25147 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1127 23:54:40.083801   25147 command_runner.go:130] > # default_ulimits = [
	I1127 23:54:40.083807   25147 command_runner.go:130] > # ]
	I1127 23:54:40.083820   25147 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1127 23:54:40.083828   25147 command_runner.go:130] > # no_pivot = false
	I1127 23:54:40.083841   25147 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1127 23:54:40.083851   25147 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1127 23:54:40.083863   25147 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1127 23:54:40.083877   25147 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1127 23:54:40.083888   25147 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1127 23:54:40.083903   25147 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1127 23:54:40.083914   25147 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1127 23:54:40.083924   25147 command_runner.go:130] > # Cgroup setting for conmon
	I1127 23:54:40.083940   25147 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1127 23:54:40.083948   25147 command_runner.go:130] > conmon_cgroup = "pod"
	I1127 23:54:40.083970   25147 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1127 23:54:40.083982   25147 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1127 23:54:40.083994   25147 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1127 23:54:40.084010   25147 command_runner.go:130] > conmon_env = [
	I1127 23:54:40.084020   25147 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1127 23:54:40.084026   25147 command_runner.go:130] > ]
	I1127 23:54:40.084082   25147 command_runner.go:130] > # Additional environment variables to set for all the
	I1127 23:54:40.084102   25147 command_runner.go:130] > # containers. These are overridden if set in the
	I1127 23:54:40.084113   25147 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1127 23:54:40.084124   25147 command_runner.go:130] > # default_env = [
	I1127 23:54:40.084130   25147 command_runner.go:130] > # ]
	I1127 23:54:40.084142   25147 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1127 23:54:40.084185   25147 command_runner.go:130] > # selinux = false
	I1127 23:54:40.084194   25147 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1127 23:54:40.084200   25147 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1127 23:54:40.084209   25147 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1127 23:54:40.084214   25147 command_runner.go:130] > # seccomp_profile = ""
	I1127 23:54:40.084219   25147 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1127 23:54:40.084230   25147 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1127 23:54:40.084243   25147 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1127 23:54:40.084255   25147 command_runner.go:130] > # which might increase security.
	I1127 23:54:40.084263   25147 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1127 23:54:40.084277   25147 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1127 23:54:40.084290   25147 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1127 23:54:40.084303   25147 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1127 23:54:40.084313   25147 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1127 23:54:40.084318   25147 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:54:40.084323   25147 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1127 23:54:40.084329   25147 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1127 23:54:40.084336   25147 command_runner.go:130] > # the cgroup blockio controller.
	I1127 23:54:40.084343   25147 command_runner.go:130] > # blockio_config_file = ""
	I1127 23:54:40.084356   25147 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1127 23:54:40.084362   25147 command_runner.go:130] > # irqbalance daemon.
	I1127 23:54:40.084373   25147 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1127 23:54:40.084384   25147 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1127 23:54:40.084396   25147 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:54:40.084404   25147 command_runner.go:130] > # rdt_config_file = ""
	I1127 23:54:40.084416   25147 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1127 23:54:40.084427   25147 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1127 23:54:40.084437   25147 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1127 23:54:40.084448   25147 command_runner.go:130] > # separate_pull_cgroup = ""
	I1127 23:54:40.084464   25147 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1127 23:54:40.084477   25147 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1127 23:54:40.084487   25147 command_runner.go:130] > # will be added.
	I1127 23:54:40.084495   25147 command_runner.go:130] > # default_capabilities = [
	I1127 23:54:40.084505   25147 command_runner.go:130] > # 	"CHOWN",
	I1127 23:54:40.084513   25147 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1127 23:54:40.084522   25147 command_runner.go:130] > # 	"FSETID",
	I1127 23:54:40.084529   25147 command_runner.go:130] > # 	"FOWNER",
	I1127 23:54:40.084538   25147 command_runner.go:130] > # 	"SETGID",
	I1127 23:54:40.084544   25147 command_runner.go:130] > # 	"SETUID",
	I1127 23:54:40.084552   25147 command_runner.go:130] > # 	"SETPCAP",
	I1127 23:54:40.084559   25147 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1127 23:54:40.084569   25147 command_runner.go:130] > # 	"KILL",
	I1127 23:54:40.084576   25147 command_runner.go:130] > # ]
	I1127 23:54:40.084590   25147 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1127 23:54:40.084603   25147 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1127 23:54:40.084613   25147 command_runner.go:130] > # default_sysctls = [
	I1127 23:54:40.084619   25147 command_runner.go:130] > # ]
	I1127 23:54:40.084630   25147 command_runner.go:130] > # List of devices on the host that a
	I1127 23:54:40.084641   25147 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1127 23:54:40.084671   25147 command_runner.go:130] > # allowed_devices = [
	I1127 23:54:40.084682   25147 command_runner.go:130] > # 	"/dev/fuse",
	I1127 23:54:40.084689   25147 command_runner.go:130] > # ]
	I1127 23:54:40.084700   25147 command_runner.go:130] > # List of additional devices. specified as
	I1127 23:54:40.084713   25147 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1127 23:54:40.084726   25147 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1127 23:54:40.084773   25147 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1127 23:54:40.084794   25147 command_runner.go:130] > # additional_devices = [
	I1127 23:54:40.084800   25147 command_runner.go:130] > # ]
	I1127 23:54:40.084808   25147 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1127 23:54:40.084815   25147 command_runner.go:130] > # cdi_spec_dirs = [
	I1127 23:54:40.084835   25147 command_runner.go:130] > # 	"/etc/cdi",
	I1127 23:54:40.084843   25147 command_runner.go:130] > # 	"/var/run/cdi",
	I1127 23:54:40.084853   25147 command_runner.go:130] > # ]
	I1127 23:54:40.084864   25147 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1127 23:54:40.084878   25147 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1127 23:54:40.084889   25147 command_runner.go:130] > # Defaults to false.
	I1127 23:54:40.084941   25147 command_runner.go:130] > # device_ownership_from_security_context = false
	I1127 23:54:40.084959   25147 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1127 23:54:40.084969   25147 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1127 23:54:40.084976   25147 command_runner.go:130] > # hooks_dir = [
	I1127 23:54:40.084985   25147 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1127 23:54:40.084993   25147 command_runner.go:130] > # ]
	I1127 23:54:40.085000   25147 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1127 23:54:40.085012   25147 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1127 23:54:40.085025   25147 command_runner.go:130] > # its default mounts from the following two files:
	I1127 23:54:40.085034   25147 command_runner.go:130] > #
	I1127 23:54:40.085045   25147 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1127 23:54:40.085059   25147 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1127 23:54:40.085072   25147 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1127 23:54:40.085080   25147 command_runner.go:130] > #
	I1127 23:54:40.085089   25147 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1127 23:54:40.085099   25147 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1127 23:54:40.085114   25147 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1127 23:54:40.085125   25147 command_runner.go:130] > #      only add mounts it finds in this file.
	I1127 23:54:40.085134   25147 command_runner.go:130] > #
	I1127 23:54:40.085251   25147 command_runner.go:130] > # default_mounts_file = ""
	I1127 23:54:40.085271   25147 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1127 23:54:40.085283   25147 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1127 23:54:40.085290   25147 command_runner.go:130] > pids_limit = 1024
	I1127 23:54:40.085303   25147 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1127 23:54:40.085317   25147 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1127 23:54:40.085330   25147 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1127 23:54:40.085348   25147 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1127 23:54:40.085357   25147 command_runner.go:130] > # log_size_max = -1
	I1127 23:54:40.085368   25147 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1127 23:54:40.085378   25147 command_runner.go:130] > # log_to_journald = false
	I1127 23:54:40.085388   25147 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1127 23:54:40.085421   25147 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1127 23:54:40.085433   25147 command_runner.go:130] > # Path to directory for container attach sockets.
	I1127 23:54:40.085442   25147 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1127 23:54:40.085454   25147 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1127 23:54:40.085470   25147 command_runner.go:130] > # bind_mount_prefix = ""
	I1127 23:54:40.085482   25147 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1127 23:54:40.085491   25147 command_runner.go:130] > # read_only = false
	I1127 23:54:40.085501   25147 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1127 23:54:40.085514   25147 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1127 23:54:40.085524   25147 command_runner.go:130] > # live configuration reload.
	I1127 23:54:40.085532   25147 command_runner.go:130] > # log_level = "info"
	I1127 23:54:40.085543   25147 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1127 23:54:40.085554   25147 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:54:40.085582   25147 command_runner.go:130] > # log_filter = ""
	I1127 23:54:40.085596   25147 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1127 23:54:40.085615   25147 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1127 23:54:40.085625   25147 command_runner.go:130] > # separated by comma.
	I1127 23:54:40.085632   25147 command_runner.go:130] > # uid_mappings = ""
	I1127 23:54:40.085643   25147 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1127 23:54:40.085657   25147 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1127 23:54:40.085667   25147 command_runner.go:130] > # separated by comma.
	I1127 23:54:40.085695   25147 command_runner.go:130] > # gid_mappings = ""
	I1127 23:54:40.085709   25147 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1127 23:54:40.085719   25147 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1127 23:54:40.085734   25147 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1127 23:54:40.085744   25147 command_runner.go:130] > # minimum_mappable_uid = -1
	I1127 23:54:40.085757   25147 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1127 23:54:40.085770   25147 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1127 23:54:40.085782   25147 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1127 23:54:40.085790   25147 command_runner.go:130] > # minimum_mappable_gid = -1
	I1127 23:54:40.085798   25147 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1127 23:54:40.085810   25147 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1127 23:54:40.085820   25147 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1127 23:54:40.085830   25147 command_runner.go:130] > # ctr_stop_timeout = 30
	I1127 23:54:40.085839   25147 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1127 23:54:40.085851   25147 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1127 23:54:40.085862   25147 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1127 23:54:40.085872   25147 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1127 23:54:40.085883   25147 command_runner.go:130] > drop_infra_ctr = false
	I1127 23:54:40.085898   25147 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1127 23:54:40.085909   25147 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1127 23:54:40.085921   25147 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1127 23:54:40.085930   25147 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1127 23:54:40.085940   25147 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1127 23:54:40.085951   25147 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1127 23:54:40.085981   25147 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1127 23:54:40.085995   25147 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1127 23:54:40.086005   25147 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1127 23:54:40.086018   25147 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1127 23:54:40.086029   25147 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1127 23:54:40.086042   25147 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1127 23:54:40.086052   25147 command_runner.go:130] > # default_runtime = "runc"
	I1127 23:54:40.086060   25147 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1127 23:54:40.086075   25147 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1127 23:54:40.086093   25147 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1127 23:54:40.086104   25147 command_runner.go:130] > # creation as a file is not desired either.
	I1127 23:54:40.086120   25147 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1127 23:54:40.086131   25147 command_runner.go:130] > # the hostname is being managed dynamically.
	I1127 23:54:40.086142   25147 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1127 23:54:40.086149   25147 command_runner.go:130] > # ]
	I1127 23:54:40.086162   25147 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1127 23:54:40.086176   25147 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1127 23:54:40.086190   25147 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1127 23:54:40.086203   25147 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1127 23:54:40.086212   25147 command_runner.go:130] > #
	I1127 23:54:40.086220   25147 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1127 23:54:40.086231   25147 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1127 23:54:40.086242   25147 command_runner.go:130] > #  runtime_type = "oci"
	I1127 23:54:40.086250   25147 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1127 23:54:40.086262   25147 command_runner.go:130] > #  privileged_without_host_devices = false
	I1127 23:54:40.086272   25147 command_runner.go:130] > #  allowed_annotations = []
	I1127 23:54:40.086278   25147 command_runner.go:130] > # Where:
	I1127 23:54:40.086289   25147 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1127 23:54:40.086303   25147 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1127 23:54:40.086313   25147 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1127 23:54:40.086326   25147 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1127 23:54:40.086337   25147 command_runner.go:130] > #   in $PATH.
	I1127 23:54:40.086351   25147 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1127 23:54:40.086362   25147 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1127 23:54:40.086372   25147 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1127 23:54:40.086382   25147 command_runner.go:130] > #   state.
	I1127 23:54:40.086392   25147 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1127 23:54:40.086406   25147 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1127 23:54:40.086419   25147 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1127 23:54:40.086431   25147 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1127 23:54:40.086441   25147 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1127 23:54:40.086455   25147 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1127 23:54:40.086466   25147 command_runner.go:130] > #   The currently recognized values are:
	I1127 23:54:40.086477   25147 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1127 23:54:40.086491   25147 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1127 23:54:40.086503   25147 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1127 23:54:40.086516   25147 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1127 23:54:40.086531   25147 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1127 23:54:40.086544   25147 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1127 23:54:40.086557   25147 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1127 23:54:40.086571   25147 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1127 23:54:40.086582   25147 command_runner.go:130] > #   should be moved to the container's cgroup
	I1127 23:54:40.086592   25147 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1127 23:54:40.086605   25147 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1127 23:54:40.086616   25147 command_runner.go:130] > runtime_type = "oci"
	I1127 23:54:40.086623   25147 command_runner.go:130] > runtime_root = "/run/runc"
	I1127 23:54:40.086633   25147 command_runner.go:130] > runtime_config_path = ""
	I1127 23:54:40.086639   25147 command_runner.go:130] > monitor_path = ""
	I1127 23:54:40.086649   25147 command_runner.go:130] > monitor_cgroup = ""
	I1127 23:54:40.086657   25147 command_runner.go:130] > monitor_exec_cgroup = ""
	I1127 23:54:40.086674   25147 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1127 23:54:40.086684   25147 command_runner.go:130] > # running containers
	I1127 23:54:40.086691   25147 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1127 23:54:40.086705   25147 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1127 23:54:40.086738   25147 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1127 23:54:40.086753   25147 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1127 23:54:40.086761   25147 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1127 23:54:40.086770   25147 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1127 23:54:40.086778   25147 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1127 23:54:40.086786   25147 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1127 23:54:40.086794   25147 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1127 23:54:40.086802   25147 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1127 23:54:40.086840   25147 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1127 23:54:40.086853   25147 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1127 23:54:40.086863   25147 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1127 23:54:40.086875   25147 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1127 23:54:40.086884   25147 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1127 23:54:40.086892   25147 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1127 23:54:40.086906   25147 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1127 23:54:40.086920   25147 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1127 23:54:40.086933   25147 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1127 23:54:40.086947   25147 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1127 23:54:40.086956   25147 command_runner.go:130] > # Example:
	I1127 23:54:40.086963   25147 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1127 23:54:40.086974   25147 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1127 23:54:40.086990   25147 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1127 23:54:40.087001   25147 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1127 23:54:40.087007   25147 command_runner.go:130] > # cpuset = 0
	I1127 23:54:40.087017   25147 command_runner.go:130] > # cpushares = "0-1"
	I1127 23:54:40.087024   25147 command_runner.go:130] > # Where:
	I1127 23:54:40.087035   25147 command_runner.go:130] > # The workload name is workload-type.
	I1127 23:54:40.087048   25147 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1127 23:54:40.087060   25147 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1127 23:54:40.087073   25147 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1127 23:54:40.087085   25147 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1127 23:54:40.087096   25147 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1127 23:54:40.087101   25147 command_runner.go:130] > # 
	I1127 23:54:40.087114   25147 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1127 23:54:40.087119   25147 command_runner.go:130] > #
	I1127 23:54:40.087129   25147 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1127 23:54:40.087142   25147 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1127 23:54:40.087155   25147 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1127 23:54:40.087168   25147 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1127 23:54:40.087179   25147 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1127 23:54:40.087188   25147 command_runner.go:130] > [crio.image]
	I1127 23:54:40.087198   25147 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1127 23:54:40.087208   25147 command_runner.go:130] > # default_transport = "docker://"
	I1127 23:54:40.087218   25147 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1127 23:54:40.087230   25147 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1127 23:54:40.087243   25147 command_runner.go:130] > # global_auth_file = ""
	I1127 23:54:40.087253   25147 command_runner.go:130] > # The image used to instantiate infra containers.
	I1127 23:54:40.087263   25147 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:54:40.087273   25147 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1127 23:54:40.087284   25147 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1127 23:54:40.087297   25147 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1127 23:54:40.087308   25147 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:54:40.087316   25147 command_runner.go:130] > # pause_image_auth_file = ""
	I1127 23:54:40.087325   25147 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1127 23:54:40.087334   25147 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1127 23:54:40.087344   25147 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1127 23:54:40.087353   25147 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1127 23:54:40.087359   25147 command_runner.go:130] > # pause_command = "/pause"
	I1127 23:54:40.087369   25147 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1127 23:54:40.087381   25147 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1127 23:54:40.087392   25147 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1127 23:54:40.087402   25147 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1127 23:54:40.087410   25147 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1127 23:54:40.087416   25147 command_runner.go:130] > # signature_policy = ""
	I1127 23:54:40.087426   25147 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1127 23:54:40.087434   25147 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1127 23:54:40.087440   25147 command_runner.go:130] > # changing them here.
	I1127 23:54:40.087447   25147 command_runner.go:130] > # insecure_registries = [
	I1127 23:54:40.087453   25147 command_runner.go:130] > # ]
	I1127 23:54:40.087463   25147 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1127 23:54:40.087472   25147 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1127 23:54:40.087479   25147 command_runner.go:130] > # image_volumes = "mkdir"
	I1127 23:54:40.087488   25147 command_runner.go:130] > # Temporary directory to use for storing big files
	I1127 23:54:40.087499   25147 command_runner.go:130] > # big_files_temporary_dir = ""
	I1127 23:54:40.087510   25147 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1127 23:54:40.087521   25147 command_runner.go:130] > # CNI plugins.
	I1127 23:54:40.087528   25147 command_runner.go:130] > [crio.network]
	I1127 23:54:40.087569   25147 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1127 23:54:40.087581   25147 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1127 23:54:40.087588   25147 command_runner.go:130] > # cni_default_network = ""
	I1127 23:54:40.087606   25147 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1127 23:54:40.087616   25147 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1127 23:54:40.087626   25147 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1127 23:54:40.087636   25147 command_runner.go:130] > # plugin_dirs = [
	I1127 23:54:40.087646   25147 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1127 23:54:40.087655   25147 command_runner.go:130] > # ]
	I1127 23:54:40.087668   25147 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1127 23:54:40.087677   25147 command_runner.go:130] > [crio.metrics]
	I1127 23:54:40.087689   25147 command_runner.go:130] > # Globally enable or disable metrics support.
	I1127 23:54:40.087699   25147 command_runner.go:130] > enable_metrics = true
	I1127 23:54:40.087711   25147 command_runner.go:130] > # Specify enabled metrics collectors.
	I1127 23:54:40.087722   25147 command_runner.go:130] > # Per default all metrics are enabled.
	I1127 23:54:40.087737   25147 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1127 23:54:40.087750   25147 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1127 23:54:40.087764   25147 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1127 23:54:40.087773   25147 command_runner.go:130] > # metrics_collectors = [
	I1127 23:54:40.087781   25147 command_runner.go:130] > # 	"operations",
	I1127 23:54:40.087801   25147 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1127 23:54:40.087809   25147 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1127 23:54:40.087816   25147 command_runner.go:130] > # 	"operations_errors",
	I1127 23:54:40.087824   25147 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1127 23:54:40.087834   25147 command_runner.go:130] > # 	"image_pulls_by_name",
	I1127 23:54:40.087841   25147 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1127 23:54:40.087852   25147 command_runner.go:130] > # 	"image_pulls_failures",
	I1127 23:54:40.087862   25147 command_runner.go:130] > # 	"image_pulls_successes",
	I1127 23:54:40.087872   25147 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1127 23:54:40.087883   25147 command_runner.go:130] > # 	"image_layer_reuse",
	I1127 23:54:40.087893   25147 command_runner.go:130] > # 	"containers_oom_total",
	I1127 23:54:40.087903   25147 command_runner.go:130] > # 	"containers_oom",
	I1127 23:54:40.087913   25147 command_runner.go:130] > # 	"processes_defunct",
	I1127 23:54:40.087923   25147 command_runner.go:130] > # 	"operations_total",
	I1127 23:54:40.087934   25147 command_runner.go:130] > # 	"operations_latency_seconds",
	I1127 23:54:40.087944   25147 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1127 23:54:40.087954   25147 command_runner.go:130] > # 	"operations_errors_total",
	I1127 23:54:40.087964   25147 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1127 23:54:40.087974   25147 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1127 23:54:40.087981   25147 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1127 23:54:40.087991   25147 command_runner.go:130] > # 	"image_pulls_success_total",
	I1127 23:54:40.087999   25147 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1127 23:54:40.088008   25147 command_runner.go:130] > # 	"containers_oom_count_total",
	I1127 23:54:40.088016   25147 command_runner.go:130] > # ]
	I1127 23:54:40.088024   25147 command_runner.go:130] > # The port on which the metrics server will listen.
	I1127 23:54:40.088033   25147 command_runner.go:130] > # metrics_port = 9090
	I1127 23:54:40.088044   25147 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1127 23:54:40.088053   25147 command_runner.go:130] > # metrics_socket = ""
	I1127 23:54:40.088063   25147 command_runner.go:130] > # The certificate for the secure metrics server.
	I1127 23:54:40.088075   25147 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1127 23:54:40.088087   25147 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1127 23:54:40.088097   25147 command_runner.go:130] > # certificate on any modification event.
	I1127 23:54:40.088106   25147 command_runner.go:130] > # metrics_cert = ""
	I1127 23:54:40.088115   25147 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1127 23:54:40.088127   25147 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1127 23:54:40.088135   25147 command_runner.go:130] > # metrics_key = ""
	I1127 23:54:40.088148   25147 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1127 23:54:40.088158   25147 command_runner.go:130] > [crio.tracing]
	I1127 23:54:40.088170   25147 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1127 23:54:40.088179   25147 command_runner.go:130] > # enable_tracing = false
	I1127 23:54:40.088191   25147 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1127 23:54:40.088203   25147 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1127 23:54:40.088215   25147 command_runner.go:130] > # Number of samples to collect per million spans.
	I1127 23:54:40.088226   25147 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1127 23:54:40.088240   25147 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1127 23:54:40.088250   25147 command_runner.go:130] > [crio.stats]
	I1127 23:54:40.088263   25147 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1127 23:54:40.088274   25147 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1127 23:54:40.088285   25147 command_runner.go:130] > # stats_collection_period = 0
	I1127 23:54:40.089005   25147 command_runner.go:130] ! time="2023-11-27 23:54:40.072742353Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1127 23:54:40.089029   25147 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
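	The dump above documents the [crio.runtime.runtimes.*] table format. As a minimal sketch only (handler name, binary path and state directory are assumptions, not taken from this run), an extra handler such as crun would be declared in the same TOML shape, typically via a drop-in file under /etc/crio/crio.conf.d/ rather than by editing crio.conf itself:
	# hypothetical drop-in, e.g. /etc/crio/crio.conf.d/10-crun.conf (not part of this test run)
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"   # absolute path to the runtime binary
	runtime_type = "oci"             # crun is an OCI runtime, not a "vm" runtime
	runtime_root = "/run/crun"       # per-runtime container state directory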
	I1127 23:54:40.089173   25147 cni.go:84] Creating CNI manager for ""
	I1127 23:54:40.089193   25147 cni.go:136] 1 nodes found, recommending kindnet
	I1127 23:54:40.089215   25147 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1127 23:54:40.089243   25147 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.159 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-883509 NodeName:multinode-883509 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1127 23:54:40.089451   25147 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-883509"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
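	The generated kubeadm.yaml above is staged at /var/tmp/minikube/kubeadm.yaml and fed to kubeadm init further down. As a hedged sketch (not executed in this run), the same file could first be exercised with kubeadm's dry-run mode, which renders the manifests and reports what would be done without bootstrapping the control plane:
	# hypothetical dry run against the generated config (not part of this test run)
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run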
	
	I1127 23:54:40.089541   25147 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-883509 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-883509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
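	The drop-in above first clears ExecStart= and then supplies the full kubelet command line, which is how a systemd drop-in overrides an ExecStart defined in the base unit. A small sketch of how the effective unit could be inspected on the node (commands assumed, not run in this test):
	# hypothetical inspection of the kubelet unit plus its 10-kubeadm.conf drop-in
	systemctl cat kubelet
	# reload unit files after editing a drop-in, then restart the service
	sudo systemctl daemon-reload && sudo systemctl restart kubelet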
	I1127 23:54:40.089606   25147 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1127 23:54:40.097777   25147 command_runner.go:130] > kubeadm
	I1127 23:54:40.097793   25147 command_runner.go:130] > kubectl
	I1127 23:54:40.097800   25147 command_runner.go:130] > kubelet
	I1127 23:54:40.097822   25147 binaries.go:44] Found k8s binaries, skipping transfer
	I1127 23:54:40.097875   25147 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1127 23:54:40.105656   25147 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1127 23:54:40.121052   25147 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1127 23:54:40.136064   25147 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1127 23:54:40.151307   25147 ssh_runner.go:195] Run: grep 192.168.39.159	control-plane.minikube.internal$ /etc/hosts
	I1127 23:54:40.154625   25147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:54:40.166181   25147 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509 for IP: 192.168.39.159
	I1127 23:54:40.166210   25147 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:54:40.166383   25147 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1127 23:54:40.166444   25147 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1127 23:54:40.166507   25147 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.key
	I1127 23:54:40.166524   25147 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.crt with IP's: []
	I1127 23:54:40.560024   25147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.crt ...
	I1127 23:54:40.560060   25147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.crt: {Name:mk8cca606993cd25af4db8eb1ba854c647dd93c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:54:40.560251   25147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.key ...
	I1127 23:54:40.560265   25147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.key: {Name:mk4f6818f6997b6c986cbe829632a40c9978c319 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:54:40.560347   25147 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/apiserver.key.b15c5797
	I1127 23:54:40.560363   25147 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/apiserver.crt.b15c5797 with IP's: [192.168.39.159 10.96.0.1 127.0.0.1 10.0.0.1]
	I1127 23:54:40.695358   25147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/apiserver.crt.b15c5797 ...
	I1127 23:54:40.695389   25147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/apiserver.crt.b15c5797: {Name:mk56f63c60fb1c2f824fea04e7914b676daa7930 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:54:40.695541   25147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/apiserver.key.b15c5797 ...
	I1127 23:54:40.695553   25147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/apiserver.key.b15c5797: {Name:mk4f4db598f60446e9486e1c3b9b30d5fd17dd5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:54:40.695616   25147 certs.go:337] copying /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/apiserver.crt.b15c5797 -> /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/apiserver.crt
	I1127 23:54:40.695679   25147 certs.go:341] copying /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/apiserver.key.b15c5797 -> /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/apiserver.key
	I1127 23:54:40.695729   25147 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/proxy-client.key
	I1127 23:54:40.695741   25147 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/proxy-client.crt with IP's: []
	I1127 23:54:40.855341   25147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/proxy-client.crt ...
	I1127 23:54:40.855369   25147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/proxy-client.crt: {Name:mkc9106d3b50fca6c35bb3283969f863610290ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:54:40.855514   25147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/proxy-client.key ...
	I1127 23:54:40.855530   25147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/proxy-client.key: {Name:mkfe873828b9d4df4b92e78a7404b1c26e407ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:54:40.855598   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1127 23:54:40.855615   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1127 23:54:40.855625   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1127 23:54:40.855638   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1127 23:54:40.855652   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1127 23:54:40.855666   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1127 23:54:40.855678   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1127 23:54:40.855690   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1127 23:54:40.855737   25147 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1127 23:54:40.855769   25147 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1127 23:54:40.855779   25147 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1127 23:54:40.855809   25147 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1127 23:54:40.855832   25147 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1127 23:54:40.855857   25147 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1127 23:54:40.855901   25147 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1127 23:54:40.855926   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:54:40.855939   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem -> /usr/share/ca-certificates/11930.pem
	I1127 23:54:40.855951   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> /usr/share/ca-certificates/119302.pem
	I1127 23:54:40.856506   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1127 23:54:40.887489   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1127 23:54:40.910263   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1127 23:54:40.934102   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1127 23:54:40.956471   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1127 23:54:40.979032   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1127 23:54:41.002026   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1127 23:54:41.025080   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1127 23:54:41.047675   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1127 23:54:41.070211   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1127 23:54:41.092356   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1127 23:54:41.114051   25147 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1127 23:54:41.129362   25147 ssh_runner.go:195] Run: openssl version
	I1127 23:54:41.134925   25147 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1127 23:54:41.134990   25147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1127 23:54:41.144676   25147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:54:41.149232   25147 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:54:41.149258   25147 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:54:41.149296   25147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:54:41.154227   25147 command_runner.go:130] > b5213941
	I1127 23:54:41.154271   25147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1127 23:54:41.163245   25147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1127 23:54:41.172110   25147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1127 23:54:41.176318   25147 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1127 23:54:41.176530   25147 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1127 23:54:41.176573   25147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1127 23:54:41.181537   25147 command_runner.go:130] > 51391683
	I1127 23:54:41.181853   25147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1127 23:54:41.190859   25147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1127 23:54:41.199747   25147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1127 23:54:41.204098   25147 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1127 23:54:41.204127   25147 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1127 23:54:41.204161   25147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1127 23:54:41.209077   25147 command_runner.go:130] > 3ec20f2e
	I1127 23:54:41.209359   25147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
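	The three openssl/ln pairs above follow OpenSSL's hashed-directory convention: "openssl x509 -hash -noout -in <cert>" prints the subject-name hash, and a symlink named <hash>.0 under /etc/ssl/certs lets TLS clients that scan that directory find the CA. A generic sketch of the same two steps (the file name is illustrative, not from this run):
	# hypothetical example of the hash-symlink convention used above
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/exampleCA.pem)
	sudo ln -fs /usr/share/ca-certificates/exampleCA.pem /etc/ssl/certs/${HASH}.0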
	I1127 23:54:41.217952   25147 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1127 23:54:41.221773   25147 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 23:54:41.221802   25147 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 23:54:41.221839   25147 kubeadm.go:404] StartCluster: {Name:multinode-883509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-883509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:54:41.221899   25147 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1127 23:54:41.221941   25147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1127 23:54:41.257257   25147 cri.go:89] found id: ""
	I1127 23:54:41.257330   25147 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1127 23:54:41.265860   25147 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1127 23:54:41.265891   25147 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1127 23:54:41.265898   25147 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1127 23:54:41.265977   25147 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1127 23:54:41.274008   25147 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1127 23:54:41.281752   25147 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1127 23:54:41.281781   25147 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1127 23:54:41.281788   25147 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1127 23:54:41.281796   25147 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 23:54:41.282019   25147 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 23:54:41.282066   25147 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1127 23:54:41.391283   25147 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1127 23:54:41.391308   25147 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1127 23:54:41.391351   25147 kubeadm.go:322] [preflight] Running pre-flight checks
	I1127 23:54:41.391360   25147 command_runner.go:130] > [preflight] Running pre-flight checks
	I1127 23:54:41.619857   25147 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 23:54:41.619907   25147 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 23:54:41.620020   25147 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 23:54:41.620033   25147 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 23:54:41.620156   25147 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1127 23:54:41.620164   25147 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1127 23:54:41.848717   25147 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 23:54:41.848796   25147 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 23:54:42.031513   25147 out.go:204]   - Generating certificates and keys ...
	I1127 23:54:42.031663   25147 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1127 23:54:42.031686   25147 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1127 23:54:42.031772   25147 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1127 23:54:42.031792   25147 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1127 23:54:42.245087   25147 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1127 23:54:42.245113   25147 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1127 23:54:42.431178   25147 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1127 23:54:42.431205   25147 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1127 23:54:42.501506   25147 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1127 23:54:42.501526   25147 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1127 23:54:42.592711   25147 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1127 23:54:42.592741   25147 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1127 23:54:42.676827   25147 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1127 23:54:42.676853   25147 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1127 23:54:42.677026   25147 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-883509] and IPs [192.168.39.159 127.0.0.1 ::1]
	I1127 23:54:42.677046   25147 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-883509] and IPs [192.168.39.159 127.0.0.1 ::1]
	I1127 23:54:42.808817   25147 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1127 23:54:42.808859   25147 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1127 23:54:42.809026   25147 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-883509] and IPs [192.168.39.159 127.0.0.1 ::1]
	I1127 23:54:42.809060   25147 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-883509] and IPs [192.168.39.159 127.0.0.1 ::1]
	I1127 23:54:42.962512   25147 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1127 23:54:42.962541   25147 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1127 23:54:43.048843   25147 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1127 23:54:43.048871   25147 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1127 23:54:43.108036   25147 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1127 23:54:43.108080   25147 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1127 23:54:43.108282   25147 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 23:54:43.108318   25147 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 23:54:43.317650   25147 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 23:54:43.317695   25147 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 23:54:43.628805   25147 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 23:54:43.628833   25147 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 23:54:43.801571   25147 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 23:54:43.801610   25147 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 23:54:43.960445   25147 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 23:54:43.960462   25147 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 23:54:44.029860   25147 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 23:54:44.029875   25147 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 23:54:44.029963   25147 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 23:54:44.030029   25147 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 23:54:44.092007   25147 out.go:204]   - Booting up control plane ...
	I1127 23:54:44.092122   25147 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 23:54:44.092136   25147 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 23:54:44.092224   25147 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 23:54:44.092232   25147 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 23:54:44.092302   25147 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 23:54:44.092330   25147 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 23:54:44.092450   25147 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 23:54:44.092475   25147 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 23:54:44.092626   25147 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 23:54:44.092637   25147 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 23:54:44.092706   25147 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1127 23:54:44.092719   25147 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1127 23:54:44.107434   25147 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 23:54:44.107473   25147 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 23:54:51.612604   25147 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.506900 seconds
	I1127 23:54:51.612627   25147 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.506900 seconds
	I1127 23:54:51.612785   25147 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 23:54:51.612802   25147 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 23:54:51.658596   25147 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 23:54:51.658625   25147 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 23:54:52.200187   25147 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1127 23:54:52.200239   25147 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1127 23:54:52.200500   25147 kubeadm.go:322] [mark-control-plane] Marking the node multinode-883509 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1127 23:54:52.200517   25147 command_runner.go:130] > [mark-control-plane] Marking the node multinode-883509 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1127 23:54:52.714593   25147 kubeadm.go:322] [bootstrap-token] Using token: eu9b8y.xkmpw53zblua8lpb
	I1127 23:54:52.716156   25147 out.go:204]   - Configuring RBAC rules ...
	I1127 23:54:52.714692   25147 command_runner.go:130] > [bootstrap-token] Using token: eu9b8y.xkmpw53zblua8lpb
	I1127 23:54:52.716284   25147 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 23:54:52.716298   25147 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 23:54:52.721989   25147 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1127 23:54:52.722003   25147 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1127 23:54:52.729194   25147 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 23:54:52.729212   25147 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 23:54:52.732556   25147 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 23:54:52.732577   25147 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 23:54:52.739513   25147 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 23:54:52.739535   25147 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 23:54:52.744556   25147 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 23:54:52.744578   25147 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 23:54:52.759015   25147 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1127 23:54:52.759031   25147 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1127 23:54:52.973359   25147 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1127 23:54:52.973390   25147 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1127 23:54:53.144715   25147 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1127 23:54:53.144764   25147 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1127 23:54:53.145774   25147 kubeadm.go:322] 
	I1127 23:54:53.145848   25147 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1127 23:54:53.145861   25147 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1127 23:54:53.145867   25147 kubeadm.go:322] 
	I1127 23:54:53.145964   25147 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1127 23:54:53.145975   25147 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1127 23:54:53.145981   25147 kubeadm.go:322] 
	I1127 23:54:53.146025   25147 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1127 23:54:53.146034   25147 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1127 23:54:53.146121   25147 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 23:54:53.146129   25147 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 23:54:53.146173   25147 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 23:54:53.146179   25147 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 23:54:53.146182   25147 kubeadm.go:322] 
	I1127 23:54:53.146226   25147 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1127 23:54:53.146232   25147 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1127 23:54:53.146235   25147 kubeadm.go:322] 
	I1127 23:54:53.146349   25147 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1127 23:54:53.146365   25147 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1127 23:54:53.146407   25147 kubeadm.go:322] 
	I1127 23:54:53.146477   25147 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1127 23:54:53.146494   25147 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1127 23:54:53.146618   25147 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 23:54:53.146631   25147 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 23:54:53.146727   25147 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 23:54:53.146737   25147 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 23:54:53.146743   25147 kubeadm.go:322] 
	I1127 23:54:53.146858   25147 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1127 23:54:53.146869   25147 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1127 23:54:53.146980   25147 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1127 23:54:53.147008   25147 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1127 23:54:53.147016   25147 kubeadm.go:322] 
	I1127 23:54:53.147143   25147 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token eu9b8y.xkmpw53zblua8lpb \
	I1127 23:54:53.147166   25147 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token eu9b8y.xkmpw53zblua8lpb \
	I1127 23:54:53.147311   25147 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1127 23:54:53.147319   25147 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1127 23:54:53.147348   25147 kubeadm.go:322] 	--control-plane 
	I1127 23:54:53.147359   25147 command_runner.go:130] > 	--control-plane 
	I1127 23:54:53.147379   25147 kubeadm.go:322] 
	I1127 23:54:53.147492   25147 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1127 23:54:53.147514   25147 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1127 23:54:53.147521   25147 kubeadm.go:322] 
	I1127 23:54:53.147642   25147 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token eu9b8y.xkmpw53zblua8lpb \
	I1127 23:54:53.147657   25147 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token eu9b8y.xkmpw53zblua8lpb \
	I1127 23:54:53.147807   25147 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1127 23:54:53.147829   25147 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1127 23:54:53.148280   25147 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1127 23:54:53.148301   25147 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1127 23:54:53.148323   25147 cni.go:84] Creating CNI manager for ""
	I1127 23:54:53.148335   25147 cni.go:136] 1 nodes found, recommending kindnet
	I1127 23:54:53.150106   25147 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1127 23:54:53.151504   25147 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1127 23:54:53.170102   25147 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1127 23:54:53.170127   25147 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1127 23:54:53.170140   25147 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1127 23:54:53.170154   25147 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 23:54:53.170163   25147 command_runner.go:130] > Access: 2023-11-27 23:54:22.192018215 +0000
	I1127 23:54:53.170169   25147 command_runner.go:130] > Modify: 2023-11-27 22:54:55.000000000 +0000
	I1127 23:54:53.170174   25147 command_runner.go:130] > Change: 2023-11-27 23:54:20.360018215 +0000
	I1127 23:54:53.170178   25147 command_runner.go:130] >  Birth: -
	I1127 23:54:53.170241   25147 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1127 23:54:53.170258   25147 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1127 23:54:53.209617   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1127 23:54:54.128197   25147 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1127 23:54:54.134633   25147 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1127 23:54:54.145560   25147 command_runner.go:130] > serviceaccount/kindnet created
	I1127 23:54:54.179019   25147 command_runner.go:130] > daemonset.apps/kindnet created
	I1127 23:54:54.181552   25147 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1127 23:54:54.181684   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=multinode-883509 minikube.k8s.io/updated_at=2023_11_27T23_54_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:54.181686   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:54.217148   25147 command_runner.go:130] > -16
	I1127 23:54:54.392783   25147 command_runner.go:130] > node/multinode-883509 labeled
	I1127 23:54:54.392835   25147 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1127 23:54:54.392916   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:54.392919   25147 ops.go:34] apiserver oom_adj: -16
	I1127 23:54:54.500651   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:54.501067   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:54.590865   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:55.091748   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:55.178900   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:55.591401   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:55.670664   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:56.091682   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:56.177776   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:56.591336   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:56.671505   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:57.091936   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:57.181647   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:57.591176   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:57.676016   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:58.091156   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:58.176132   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:58.591389   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:58.676480   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:59.091472   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:59.184147   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:54:59.591780   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:54:59.679317   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:55:00.091947   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:55:00.177104   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:55:00.591866   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:55:00.673849   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:55:01.091863   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:55:01.177348   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:55:01.591284   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:55:01.678643   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:55:02.091239   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:55:02.180912   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:55:02.591435   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:55:02.690222   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:55:03.091798   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:55:03.185017   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:55:03.591263   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:55:03.699364   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:55:04.091705   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:55:04.173963   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:55:04.591731   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:55:04.668640   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:55:05.091719   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:55:05.182077   25147 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1127 23:55:05.591974   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:55:05.741696   25147 command_runner.go:130] > NAME      SECRETS   AGE
	I1127 23:55:05.741720   25147 command_runner.go:130] > default   0         0s
	I1127 23:55:05.741754   25147 kubeadm.go:1081] duration metric: took 11.560119622s to wait for elevateKubeSystemPrivileges.
	I1127 23:55:05.741769   25147 kubeadm.go:406] StartCluster complete in 24.519933507s
	I1127 23:55:05.741782   25147 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:55:05.741849   25147 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1127 23:55:05.742502   25147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:55:05.742729   25147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1127 23:55:05.742877   25147 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1127 23:55:05.742969   25147 config.go:182] Loaded profile config "multinode-883509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:55:05.742977   25147 addons.go:69] Setting default-storageclass=true in profile "multinode-883509"
	I1127 23:55:05.742998   25147 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-883509"
	I1127 23:55:05.743074   25147 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1127 23:55:05.742968   25147 addons.go:69] Setting storage-provisioner=true in profile "multinode-883509"
	I1127 23:55:05.743115   25147 addons.go:231] Setting addon storage-provisioner=true in "multinode-883509"
	I1127 23:55:05.743193   25147 host.go:66] Checking if "multinode-883509" exists ...
	I1127 23:55:05.743437   25147 kapi.go:59] client config for multinode-883509: &rest.Config{Host:"https://192.168.39.159:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:55:05.743540   25147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:55:05.743554   25147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:55:05.743591   25147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:55:05.743673   25147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:55:05.745421   25147 round_trippers.go:463] GET https://192.168.39.159:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1127 23:55:05.745439   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:05.745451   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:05.745475   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:05.745684   25147 cert_rotation.go:137] Starting client certificate rotation controller
	I1127 23:55:05.761034   25147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45641
	I1127 23:55:05.761431   25147 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:55:05.761931   25147 main.go:141] libmachine: Using API Version  1
	I1127 23:55:05.761971   25147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:55:05.762280   25147 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:55:05.762804   25147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:55:05.762825   25147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33319
	I1127 23:55:05.762841   25147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:55:05.763189   25147 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:55:05.763666   25147 main.go:141] libmachine: Using API Version  1
	I1127 23:55:05.763683   25147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:55:05.764038   25147 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:55:05.764223   25147 main.go:141] libmachine: (multinode-883509) Calling .GetState
	I1127 23:55:05.766406   25147 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1127 23:55:05.766715   25147 kapi.go:59] client config for multinode-883509: &rest.Config{Host:"https://192.168.39.159:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:55:05.767003   25147 addons.go:231] Setting addon default-storageclass=true in "multinode-883509"
	I1127 23:55:05.767038   25147 host.go:66] Checking if "multinode-883509" exists ...
	I1127 23:55:05.767412   25147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:55:05.767443   25147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:55:05.767645   25147 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I1127 23:55:05.767665   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:05.767674   25147 round_trippers.go:580]     Content-Length: 291
	I1127 23:55:05.767683   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:05 GMT
	I1127 23:55:05.767691   25147 round_trippers.go:580]     Audit-Id: 9edca45c-98ab-4c9f-be0d-44c36be23b23
	I1127 23:55:05.767701   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:05.767713   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:05.767727   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:05.767738   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:05.767765   25147 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"6e7cc9d5-ec42-4b16-9afb-9c3b43521ec6","resourceVersion":"391","creationTimestamp":"2023-11-27T23:54:52Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1127 23:55:05.768142   25147 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"6e7cc9d5-ec42-4b16-9afb-9c3b43521ec6","resourceVersion":"391","creationTimestamp":"2023-11-27T23:54:52Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1127 23:55:05.768195   25147 round_trippers.go:463] PUT https://192.168.39.159:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1127 23:55:05.768207   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:05.768221   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:05.768232   25147 round_trippers.go:473]     Content-Type: application/json
	I1127 23:55:05.768237   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:05.777112   25147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46741
	I1127 23:55:05.777521   25147 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:55:05.777950   25147 main.go:141] libmachine: Using API Version  1
	I1127 23:55:05.777969   25147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:55:05.778260   25147 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:55:05.778458   25147 main.go:141] libmachine: (multinode-883509) Calling .GetState
	I1127 23:55:05.780001   25147 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1127 23:55:05.781950   25147 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:55:05.784009   25147 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:55:05.784027   25147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1127 23:55:05.784050   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1127 23:55:05.785349   25147 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1127 23:55:05.785366   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:05.785376   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:05 GMT
	I1127 23:55:05.785384   25147 round_trippers.go:580]     Audit-Id: 789342b5-63bc-404d-a33c-7dd67b412191
	I1127 23:55:05.785392   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:05.785405   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:05.785422   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:05.785433   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:05.785441   25147 round_trippers.go:580]     Content-Length: 291
	I1127 23:55:05.785581   25147 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"6e7cc9d5-ec42-4b16-9afb-9c3b43521ec6","resourceVersion":"393","creationTimestamp":"2023-11-27T23:54:52Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1127 23:55:05.785758   25147 round_trippers.go:463] GET https://192.168.39.159:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1127 23:55:05.785773   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:05.785782   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:05.785795   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:05.786959   25147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43865
	I1127 23:55:05.787265   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:55:05.787341   25147 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:55:05.787810   25147 main.go:141] libmachine: Using API Version  1
	I1127 23:55:05.787829   25147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:55:05.787864   25147 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:55:05.787889   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1127 23:55:05.787895   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:55:05.788070   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:55:05.788173   25147 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:55:05.788202   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1127 23:55:05.788338   25147 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa Username:docker}
	I1127 23:55:05.788785   25147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:55:05.788818   25147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:55:05.801792   25147 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1127 23:55:05.801818   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:05.801829   25147 round_trippers.go:580]     Content-Length: 291
	I1127 23:55:05.801838   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:05 GMT
	I1127 23:55:05.801846   25147 round_trippers.go:580]     Audit-Id: ba286a68-ae8b-479f-9bc6-c4bac433820e
	I1127 23:55:05.801855   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:05.801863   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:05.801873   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:05.801884   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:05.801912   25147 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"6e7cc9d5-ec42-4b16-9afb-9c3b43521ec6","resourceVersion":"393","creationTimestamp":"2023-11-27T23:54:52Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1127 23:55:05.802015   25147 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-883509" context rescaled to 1 replicas
	I1127 23:55:05.802052   25147 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1127 23:55:05.804769   25147 out.go:177] * Verifying Kubernetes components...
	I1127 23:55:05.802775   25147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42485
	I1127 23:55:05.806408   25147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:55:05.806676   25147 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:55:05.807243   25147 main.go:141] libmachine: Using API Version  1
	I1127 23:55:05.807267   25147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:55:05.807589   25147 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:55:05.807762   25147 main.go:141] libmachine: (multinode-883509) Calling .GetState
	I1127 23:55:05.809396   25147 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1127 23:55:05.809629   25147 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1127 23:55:05.809646   25147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1127 23:55:05.809658   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1127 23:55:05.812470   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:55:05.812945   25147 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:55:05.812981   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:55:05.813112   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1127 23:55:05.813238   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:55:05.813334   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1127 23:55:05.813422   25147 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa Username:docker}
	I1127 23:55:05.924252   25147 command_runner.go:130] > apiVersion: v1
	I1127 23:55:05.924272   25147 command_runner.go:130] > data:
	I1127 23:55:05.924276   25147 command_runner.go:130] >   Corefile: |
	I1127 23:55:05.924280   25147 command_runner.go:130] >     .:53 {
	I1127 23:55:05.924284   25147 command_runner.go:130] >         errors
	I1127 23:55:05.924291   25147 command_runner.go:130] >         health {
	I1127 23:55:05.924295   25147 command_runner.go:130] >            lameduck 5s
	I1127 23:55:05.924298   25147 command_runner.go:130] >         }
	I1127 23:55:05.924302   25147 command_runner.go:130] >         ready
	I1127 23:55:05.924322   25147 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1127 23:55:05.924326   25147 command_runner.go:130] >            pods insecure
	I1127 23:55:05.924332   25147 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1127 23:55:05.924337   25147 command_runner.go:130] >            ttl 30
	I1127 23:55:05.924341   25147 command_runner.go:130] >         }
	I1127 23:55:05.924345   25147 command_runner.go:130] >         prometheus :9153
	I1127 23:55:05.924350   25147 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1127 23:55:05.924356   25147 command_runner.go:130] >            max_concurrent 1000
	I1127 23:55:05.924363   25147 command_runner.go:130] >         }
	I1127 23:55:05.924367   25147 command_runner.go:130] >         cache 30
	I1127 23:55:05.924374   25147 command_runner.go:130] >         loop
	I1127 23:55:05.924378   25147 command_runner.go:130] >         reload
	I1127 23:55:05.924382   25147 command_runner.go:130] >         loadbalance
	I1127 23:55:05.924385   25147 command_runner.go:130] >     }
	I1127 23:55:05.924390   25147 command_runner.go:130] > kind: ConfigMap
	I1127 23:55:05.924393   25147 command_runner.go:130] > metadata:
	I1127 23:55:05.924400   25147 command_runner.go:130] >   creationTimestamp: "2023-11-27T23:54:52Z"
	I1127 23:55:05.924407   25147 command_runner.go:130] >   name: coredns
	I1127 23:55:05.924411   25147 command_runner.go:130] >   namespace: kube-system
	I1127 23:55:05.924415   25147 command_runner.go:130] >   resourceVersion: "269"
	I1127 23:55:05.924420   25147 command_runner.go:130] >   uid: d6785235-40c2-4ba1-9508-c3c6363ea59f
	I1127 23:55:05.925791   25147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1127 23:55:05.926211   25147 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1127 23:55:05.926647   25147 kapi.go:59] client config for multinode-883509: &rest.Config{Host:"https://192.168.39.159:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:55:05.927015   25147 node_ready.go:35] waiting up to 6m0s for node "multinode-883509" to be "Ready" ...
	I1127 23:55:05.927112   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:05.927124   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:05.927135   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:05.927145   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:05.933129   25147 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1127 23:55:05.933151   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:05.933158   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:05.933164   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:05.933170   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:05 GMT
	I1127 23:55:05.933175   25147 round_trippers.go:580]     Audit-Id: 7de0b8db-07c2-4e71-99e1-fdeece2f5e11
	I1127 23:55:05.933180   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:05.933185   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:05.933348   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"356","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1127 23:55:05.934124   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:05.934143   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:05.934153   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:05.934162   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:05.945256   25147 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1127 23:55:05.945279   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:05.945286   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:05.945291   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:05.945297   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:05.945301   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:05 GMT
	I1127 23:55:05.945306   25147 round_trippers.go:580]     Audit-Id: 7f66eac6-5005-4bfa-b477-47a52ffb6db7
	I1127 23:55:05.945313   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:05.945449   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"356","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1127 23:55:06.105687   25147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:55:06.112871   25147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1127 23:55:06.446271   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:06.446303   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:06.446315   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:06.446323   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:06.457879   25147 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1127 23:55:06.457918   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:06.457930   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:06.457940   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:06.457949   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:06.457958   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:06 GMT
	I1127 23:55:06.457968   25147 round_trippers.go:580]     Audit-Id: 26c50470-4274-4eb8-b817-62e3e84cbf6b
	I1127 23:55:06.457981   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:06.458142   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"356","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1127 23:55:06.619054   25147 command_runner.go:130] > configmap/coredns replaced
	I1127 23:55:06.619089   25147 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1127 23:55:06.909534   25147 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1127 23:55:06.925892   25147 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1127 23:55:06.938036   25147 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1127 23:55:06.946301   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:06.946327   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:06.946340   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:06.946351   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:06.948390   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:06.948413   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:06.948423   25147 round_trippers.go:580]     Audit-Id: 84446329-1a77-481b-817c-ea6e50b806ad
	I1127 23:55:06.948431   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:06.948447   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:06.948462   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:06.948474   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:06.948483   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:06 GMT
	I1127 23:55:06.948674   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"356","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1127 23:55:06.958543   25147 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1127 23:55:06.968851   25147 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1127 23:55:06.979244   25147 command_runner.go:130] > pod/storage-provisioner created
	I1127 23:55:06.981777   25147 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1127 23:55:06.981792   25147 main.go:141] libmachine: Making call to close driver server
	I1127 23:55:06.981811   25147 main.go:141] libmachine: (multinode-883509) Calling .Close
	I1127 23:55:06.981817   25147 main.go:141] libmachine: Making call to close driver server
	I1127 23:55:06.981828   25147 main.go:141] libmachine: (multinode-883509) Calling .Close
	I1127 23:55:06.982083   25147 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:55:06.982099   25147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:55:06.982108   25147 main.go:141] libmachine: Making call to close driver server
	I1127 23:55:06.982121   25147 main.go:141] libmachine: (multinode-883509) Calling .Close
	I1127 23:55:06.982180   25147 main.go:141] libmachine: (multinode-883509) DBG | Closing plugin on server side
	I1127 23:55:06.982200   25147 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:55:06.982209   25147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:55:06.982221   25147 main.go:141] libmachine: Making call to close driver server
	I1127 23:55:06.982229   25147 main.go:141] libmachine: (multinode-883509) Calling .Close
	I1127 23:55:06.982316   25147 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:55:06.982329   25147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:55:06.983573   25147 main.go:141] libmachine: (multinode-883509) DBG | Closing plugin on server side
	I1127 23:55:06.983597   25147 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:55:06.983605   25147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:55:06.983690   25147 round_trippers.go:463] GET https://192.168.39.159:8443/apis/storage.k8s.io/v1/storageclasses
	I1127 23:55:06.983696   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:06.983706   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:06.983713   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:06.989528   25147 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1127 23:55:06.989545   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:06.989552   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:06.989558   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:06.989563   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:06.989568   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:06.989573   25147 round_trippers.go:580]     Content-Length: 1273
	I1127 23:55:06.989578   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:06 GMT
	I1127 23:55:06.989587   25147 round_trippers.go:580]     Audit-Id: 8c17074c-e661-41b5-9cdc-3ea6e7c0964c
	I1127 23:55:06.989631   25147 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"standard","uid":"bae3a9b1-d378-4e00-88fb-a199b592d9e6","resourceVersion":"405","creationTimestamp":"2023-11-27T23:55:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-27T23:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1127 23:55:06.990124   25147 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"bae3a9b1-d378-4e00-88fb-a199b592d9e6","resourceVersion":"405","creationTimestamp":"2023-11-27T23:55:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-27T23:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1127 23:55:06.990190   25147 round_trippers.go:463] PUT https://192.168.39.159:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1127 23:55:06.990202   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:06.990211   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:06.990219   25147 round_trippers.go:473]     Content-Type: application/json
	I1127 23:55:06.990232   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:06.996891   25147 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1127 23:55:06.996918   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:06.996925   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:06 GMT
	I1127 23:55:06.996931   25147 round_trippers.go:580]     Audit-Id: 0a01b701-83c4-4535-80d1-abbfb9d7fc7d
	I1127 23:55:06.996936   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:06.996941   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:06.996946   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:06.996952   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:06.996960   25147 round_trippers.go:580]     Content-Length: 1220
	I1127 23:55:06.997002   25147 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"bae3a9b1-d378-4e00-88fb-a199b592d9e6","resourceVersion":"405","creationTimestamp":"2023-11-27T23:55:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-27T23:55:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1127 23:55:06.997169   25147 main.go:141] libmachine: Making call to close driver server
	I1127 23:55:06.997188   25147 main.go:141] libmachine: (multinode-883509) Calling .Close
	I1127 23:55:06.997521   25147 main.go:141] libmachine: (multinode-883509) DBG | Closing plugin on server side
	I1127 23:55:06.997533   25147 main.go:141] libmachine: Successfully made call to close driver server
	I1127 23:55:06.997548   25147 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 23:55:07.000443   25147 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1127 23:55:07.002018   25147 addons.go:502] enable addons completed in 1.259140441s: enabled=[storage-provisioner default-storageclass]
	I1127 23:55:07.446050   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:07.446072   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:07.446080   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:07.446086   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:07.448776   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:07.448796   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:07.448804   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:07.448815   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:07.448820   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:07 GMT
	I1127 23:55:07.448825   25147 round_trippers.go:580]     Audit-Id: f45aa5a3-dc0f-44d0-94d5-d1ac4b8a5810
	I1127 23:55:07.448830   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:07.448836   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:07.448982   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"356","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1127 23:55:07.946728   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:07.946756   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:07.946766   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:07.946776   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:07.949770   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:07.949797   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:07.949808   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:07.949813   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:07 GMT
	I1127 23:55:07.949819   25147 round_trippers.go:580]     Audit-Id: 4c270aa1-833e-48b6-94aa-4fff38ec2895
	I1127 23:55:07.949824   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:07.949829   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:07.949835   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:07.950036   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"356","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1127 23:55:07.950384   25147 node_ready.go:58] node "multinode-883509" has status "Ready":"False"
	I1127 23:55:08.446757   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:08.446787   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:08.446799   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:08.446809   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:08.449323   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:08.449346   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:08.449356   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:08.449364   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:08.449371   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:08 GMT
	I1127 23:55:08.449380   25147 round_trippers.go:580]     Audit-Id: d8c44639-dee4-41af-bf53-12221aa62b63
	I1127 23:55:08.449387   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:08.449396   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:08.449573   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"356","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1127 23:55:08.946466   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:08.946485   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:08.946494   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:08.946500   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:08.949226   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:08.949248   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:08.949255   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:08.949261   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:08 GMT
	I1127 23:55:08.949266   25147 round_trippers.go:580]     Audit-Id: f12e8b76-5a09-44c7-8dfc-3cbf70720ae5
	I1127 23:55:08.949271   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:08.949279   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:08.949290   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:08.949464   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"356","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1127 23:55:09.446947   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:09.446994   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:09.447006   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:09.447016   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:09.451796   25147 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1127 23:55:09.451821   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:09.451830   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:09.451837   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:09.451844   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:09.451851   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:09 GMT
	I1127 23:55:09.451859   25147 round_trippers.go:580]     Audit-Id: 799201ca-8514-4130-ae74-7b0c1f6fd453
	I1127 23:55:09.451869   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:09.452045   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"356","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1127 23:55:09.946785   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:09.946809   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:09.946817   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:09.946823   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:09.949556   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:09.949581   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:09.949587   25147 round_trippers.go:580]     Audit-Id: b1d1838a-ff18-414a-ba31-312adb3d1a84
	I1127 23:55:09.949593   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:09.949598   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:09.949604   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:09.949610   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:09.949618   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:09 GMT
	I1127 23:55:09.949940   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"356","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1127 23:55:10.446690   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:10.446720   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:10.446732   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:10.446741   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:10.452101   25147 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1127 23:55:10.452128   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:10.452137   25147 round_trippers.go:580]     Audit-Id: 12835e53-e2b3-4922-af64-98a5c22cd246
	I1127 23:55:10.452144   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:10.452151   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:10.452157   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:10.452164   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:10.452171   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:10 GMT
	I1127 23:55:10.452506   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"356","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1127 23:55:10.452828   25147 node_ready.go:58] node "multinode-883509" has status "Ready":"False"
	I1127 23:55:10.946966   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:10.946991   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:10.947002   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:10.947012   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:10.949989   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:10.950014   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:10.950023   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:10.950031   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:10.950038   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:10.950045   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:10.950054   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:10 GMT
	I1127 23:55:10.950062   25147 round_trippers.go:580]     Audit-Id: 64545efb-2065-4b29-a8fc-a1b2fdc3c679
	I1127 23:55:10.950285   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1127 23:55:10.950687   25147 node_ready.go:49] node "multinode-883509" has status "Ready":"True"
	I1127 23:55:10.950708   25147 node_ready.go:38] duration metric: took 5.02367348s waiting for node "multinode-883509" to be "Ready" ...
	I1127 23:55:10.950720   25147 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 23:55:10.950821   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I1127 23:55:10.950833   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:10.950844   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:10.950855   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:10.961483   25147 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1127 23:55:10.961509   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:10.961520   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:10.961529   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:10.961538   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:10.961567   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:10.961582   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:10 GMT
	I1127 23:55:10.961591   25147 round_trippers.go:580]     Audit-Id: 170cada4-5cff-4946-9411-64f66d8f8842
	I1127 23:55:10.964573   25147 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"434","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54818 chars]
	I1127 23:55:10.968984   25147 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:10.969069   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1127 23:55:10.969082   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:10.969094   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:10.969111   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:10.971682   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:10.971703   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:10.971713   25147 round_trippers.go:580]     Audit-Id: 081ed0a3-78e3-432b-bc07-5722e5fefada
	I1127 23:55:10.971721   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:10.971728   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:10.971737   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:10.971753   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:10.971762   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:10 GMT
	I1127 23:55:10.972063   25147 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"434","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1127 23:55:10.972606   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:10.972625   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:10.972636   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:10.972645   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:10.977572   25147 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1127 23:55:10.977595   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:10.977602   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:10.977608   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:10 GMT
	I1127 23:55:10.977613   25147 round_trippers.go:580]     Audit-Id: 88b77fbc-2d2f-43e4-8330-8e467bbc0ea8
	I1127 23:55:10.977618   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:10.977627   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:10.977632   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:10.977793   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1127 23:55:10.978292   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1127 23:55:10.978313   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:10.978323   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:10.978332   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:10.980308   25147 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:55:10.980326   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:10.980335   25147 round_trippers.go:580]     Audit-Id: cc13e7dd-d33c-43b5-b921-0223f0bf50a0
	I1127 23:55:10.980343   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:10.980354   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:10.980363   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:10.980371   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:10.980389   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:10 GMT
	I1127 23:55:10.980552   25147 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"434","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1127 23:55:10.981044   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:10.981060   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:10.981070   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:10.981079   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:10.983109   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:10.983124   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:10.983133   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:10.983141   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:10.983149   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:10.983169   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:10 GMT
	I1127 23:55:10.983178   25147 round_trippers.go:580]     Audit-Id: 0e1fbaf0-e5d0-4aa4-87bd-ebfd6c2611cd
	I1127 23:55:10.983187   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:10.983306   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1127 23:55:11.484244   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1127 23:55:11.484276   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:11.484288   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:11.484297   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:11.488655   25147 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1127 23:55:11.488684   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:11.488691   25147 round_trippers.go:580]     Audit-Id: 2df92ac7-bdb6-478b-9ffd-74a536a2f4c6
	I1127 23:55:11.488697   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:11.488702   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:11.488711   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:11.488716   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:11.488723   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:11 GMT
	I1127 23:55:11.489195   25147 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"434","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1127 23:55:11.489615   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:11.489628   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:11.489635   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:11.489641   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:11.493187   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:11.493208   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:11.493217   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:11.493226   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:11 GMT
	I1127 23:55:11.493237   25147 round_trippers.go:580]     Audit-Id: 91a478c5-26f8-424e-ac40-990f616df148
	I1127 23:55:11.493245   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:11.493252   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:11.493265   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:11.493410   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1127 23:55:11.984046   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1127 23:55:11.984072   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:11.984081   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:11.984087   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:11.987324   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:11.987346   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:11.987352   25147 round_trippers.go:580]     Audit-Id: f08ed90a-a616-4a95-908d-b6527c867e1d
	I1127 23:55:11.987358   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:11.987365   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:11.987370   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:11.987376   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:11.987381   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:11 GMT
	I1127 23:55:11.987591   25147 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"434","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1127 23:55:11.988133   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:11.988153   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:11.988165   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:11.988174   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:11.991249   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:11.991265   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:11.991274   25147 round_trippers.go:580]     Audit-Id: 3ed8c61e-cf63-4fb6-b39b-8d8dce7ea8d9
	I1127 23:55:11.991283   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:11.991291   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:11.991300   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:11.991318   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:11.991324   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:11 GMT
	I1127 23:55:11.991767   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1127 23:55:12.484515   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1127 23:55:12.484547   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:12.484559   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:12.484569   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:12.487455   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:12.487477   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:12.487487   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:12 GMT
	I1127 23:55:12.487496   25147 round_trippers.go:580]     Audit-Id: 2f026475-a5cc-44d7-9485-21bf7362c7d8
	I1127 23:55:12.487503   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:12.487512   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:12.487519   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:12.487528   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:12.488181   25147 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"445","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1127 23:55:12.488627   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:12.488641   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:12.488648   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:12.488654   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:12.491144   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:12.491162   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:12.491168   25147 round_trippers.go:580]     Audit-Id: 7ab48af2-1974-4831-a01c-659c965c9ed4
	I1127 23:55:12.491173   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:12.491178   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:12.491184   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:12.491191   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:12.491202   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:12 GMT
	I1127 23:55:12.491788   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1127 23:55:12.492106   25147 pod_ready.go:92] pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:12.492128   25147 pod_ready.go:81] duration metric: took 1.523113427s waiting for pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:12.492140   25147 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:12.492193   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-883509
	I1127 23:55:12.492204   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:12.492214   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:12.492227   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:12.494284   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:12.494304   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:12.494314   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:12.494322   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:12 GMT
	I1127 23:55:12.494330   25147 round_trippers.go:580]     Audit-Id: 5335a4dc-a850-45f9-9c2f-454a6f7fd6c0
	I1127 23:55:12.494339   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:12.494346   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:12.494358   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:12.494490   25147 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-883509","namespace":"kube-system","uid":"58bb8943-0a7c-4d4c-a090-ea8de587f504","resourceVersion":"336","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.159:2379","kubernetes.io/config.hash":"8d23c211c8738dad6e022e03cd2c9ea7","kubernetes.io/config.mirror":"8d23c211c8738dad6e022e03cd2c9ea7","kubernetes.io/config.seen":"2023-11-27T23:54:53.116542435Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6078 chars]
	I1127 23:55:12.494920   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:12.494936   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:12.494947   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:12.494956   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:12.497015   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:12.497036   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:12.497046   25147 round_trippers.go:580]     Audit-Id: 92b55605-c8d3-411e-bf5e-0a6015ba8514
	I1127 23:55:12.497054   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:12.497065   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:12.497073   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:12.497089   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:12.497096   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:12 GMT
	I1127 23:55:12.497214   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1127 23:55:12.497650   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-883509
	I1127 23:55:12.497668   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:12.497678   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:12.497687   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:12.499298   25147 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:55:12.499312   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:12.499318   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:12.499323   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:12 GMT
	I1127 23:55:12.499328   25147 round_trippers.go:580]     Audit-Id: f9a521cb-962c-4a35-854a-0ceeb82112af
	I1127 23:55:12.499344   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:12.499357   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:12.499365   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:12.499538   25147 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-883509","namespace":"kube-system","uid":"58bb8943-0a7c-4d4c-a090-ea8de587f504","resourceVersion":"336","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.159:2379","kubernetes.io/config.hash":"8d23c211c8738dad6e022e03cd2c9ea7","kubernetes.io/config.mirror":"8d23c211c8738dad6e022e03cd2c9ea7","kubernetes.io/config.seen":"2023-11-27T23:54:53.116542435Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6078 chars]
	I1127 23:55:12.499940   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:12.499956   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:12.499963   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:12.499969   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:12.501834   25147 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:55:12.501847   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:12.501853   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:12.501858   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:12.501863   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:12 GMT
	I1127 23:55:12.501868   25147 round_trippers.go:580]     Audit-Id: 943e2dab-11d1-4459-8c70-95ff68e4cae8
	I1127 23:55:12.501873   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:12.501878   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:12.502331   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1127 23:55:13.003279   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-883509
	I1127 23:55:13.003316   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:13.003329   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:13.003340   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:13.007726   25147 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1127 23:55:13.007758   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:13.007767   25147 round_trippers.go:580]     Audit-Id: 1b01830c-8508-4cc0-8b23-e87994d68f8a
	I1127 23:55:13.007777   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:13.007785   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:13.007794   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:13.007804   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:13.007812   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:13 GMT
	I1127 23:55:13.008797   25147 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-883509","namespace":"kube-system","uid":"58bb8943-0a7c-4d4c-a090-ea8de587f504","resourceVersion":"336","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.159:2379","kubernetes.io/config.hash":"8d23c211c8738dad6e022e03cd2c9ea7","kubernetes.io/config.mirror":"8d23c211c8738dad6e022e03cd2c9ea7","kubernetes.io/config.seen":"2023-11-27T23:54:53.116542435Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6078 chars]
	I1127 23:55:13.009282   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:13.009301   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:13.009313   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:13.009323   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:13.011370   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:13.011386   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:13.011396   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:13 GMT
	I1127 23:55:13.011405   25147 round_trippers.go:580]     Audit-Id: 8d4585b0-4261-411b-a845-bd43761b0653
	I1127 23:55:13.011414   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:13.011431   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:13.011439   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:13.011447   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:13.011597   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1127 23:55:13.503238   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-883509
	I1127 23:55:13.503269   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:13.503282   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:13.503291   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:13.506603   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:13.506632   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:13.506642   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:13.506649   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:13.506656   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:13.506664   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:13 GMT
	I1127 23:55:13.506671   25147 round_trippers.go:580]     Audit-Id: 46ac26dd-708b-4df2-ba1d-673bd3112529
	I1127 23:55:13.506680   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:13.506885   25147 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-883509","namespace":"kube-system","uid":"58bb8943-0a7c-4d4c-a090-ea8de587f504","resourceVersion":"451","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.159:2379","kubernetes.io/config.hash":"8d23c211c8738dad6e022e03cd2c9ea7","kubernetes.io/config.mirror":"8d23c211c8738dad6e022e03cd2c9ea7","kubernetes.io/config.seen":"2023-11-27T23:54:53.116542435Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1127 23:55:13.507379   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:13.507402   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:13.507414   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:13.507433   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:13.509902   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:13.509926   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:13.509937   25147 round_trippers.go:580]     Audit-Id: 35aaf51e-f970-4c56-a932-0fe7e0bca7b5
	I1127 23:55:13.509946   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:13.509956   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:13.509963   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:13.509971   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:13.509980   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:13 GMT
	I1127 23:55:13.510109   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1127 23:55:13.510532   25147 pod_ready.go:92] pod "etcd-multinode-883509" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:13.510553   25147 pod_ready.go:81] duration metric: took 1.018405225s waiting for pod "etcd-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:13.510565   25147 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:13.510614   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-883509
	I1127 23:55:13.510621   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:13.510628   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:13.510634   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:13.513058   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:13.513078   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:13.513087   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:13.513096   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:13.513113   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:13.513126   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:13.513132   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:13 GMT
	I1127 23:55:13.513137   25147 round_trippers.go:580]     Audit-Id: c89d4d22-c025-4c66-ae88-27937b345053
	I1127 23:55:13.513336   25147 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-883509","namespace":"kube-system","uid":"0a144c07-5db8-418a-ad15-110fabc7f377","resourceVersion":"452","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.159:8443","kubernetes.io/config.hash":"3b5e7b5fdb84862f46e6248e54c84795","kubernetes.io/config.mirror":"3b5e7b5fdb84862f46e6248e54c84795","kubernetes.io/config.seen":"2023-11-27T23:54:53.116543447Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1127 23:55:13.513748   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:13.513764   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:13.513771   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:13.513777   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:13.515955   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:13.515972   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:13.515978   25147 round_trippers.go:580]     Audit-Id: 3895f8d0-cdf0-4805-ab6f-1e5f44fac8db
	I1127 23:55:13.515985   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:13.515993   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:13.516010   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:13.516019   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:13.516031   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:13 GMT
	I1127 23:55:13.516271   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1127 23:55:13.516621   25147 pod_ready.go:92] pod "kube-apiserver-multinode-883509" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:13.516638   25147 pod_ready.go:81] duration metric: took 6.066093ms waiting for pod "kube-apiserver-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:13.516646   25147 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:13.548022   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-883509
	I1127 23:55:13.548049   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:13.548062   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:13.548076   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:13.550915   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:13.550939   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:13.550948   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:13.550955   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:13.550963   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:13 GMT
	I1127 23:55:13.550971   25147 round_trippers.go:580]     Audit-Id: 7ac5b1a3-f9fe-42b9-aa02-817cd392440f
	I1127 23:55:13.550980   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:13.550988   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:13.551187   25147 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-883509","namespace":"kube-system","uid":"f8474e48-c333-4772-ae1f-59cdb2bf95eb","resourceVersion":"450","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"de58e44a016d081ac103af6880ca64f0","kubernetes.io/config.mirror":"de58e44a016d081ac103af6880ca64f0","kubernetes.io/config.seen":"2023-11-27T23:54:53.116544230Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1127 23:55:13.747016   25147 request.go:629] Waited for 195.328881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:13.747084   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:13.747090   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:13.747098   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:13.747110   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:13.749947   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:13.749973   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:13.749983   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:13.749992   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:13.750001   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:13.750013   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:13 GMT
	I1127 23:55:13.750023   25147 round_trippers.go:580]     Audit-Id: f76c17b5-f83d-4789-a739-b1990682daa3
	I1127 23:55:13.750034   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:13.750230   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1127 23:55:13.750548   25147 pod_ready.go:92] pod "kube-controller-manager-multinode-883509" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:13.750564   25147 pod_ready.go:81] duration metric: took 233.911536ms waiting for pod "kube-controller-manager-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:13.750575   25147 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7g246" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:13.947887   25147 request.go:629] Waited for 197.244364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7g246
	I1127 23:55:13.947943   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7g246
	I1127 23:55:13.947949   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:13.947956   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:13.947963   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:13.951185   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:13.951210   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:13.951221   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:13.951228   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:13.951236   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:13.951244   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:13.951251   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:13 GMT
	I1127 23:55:13.951261   25147 round_trippers.go:580]     Audit-Id: 6eabe23b-457e-47d1-a302-94d04f7be354
	I1127 23:55:13.951697   25147 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7g246","generateName":"kube-proxy-","namespace":"kube-system","uid":"c03a2053-f013-4269-a5e1-0acfebfc606c","resourceVersion":"417","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"dea68644-28a8-4da5-b7c7-c0035d2ae817","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dea68644-28a8-4da5-b7c7-c0035d2ae817\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1127 23:55:14.147541   25147 request.go:629] Waited for 195.301484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:14.147601   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:14.147606   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:14.147614   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:14.147621   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:14.150533   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:14.150560   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:14.150575   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:14.150583   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:14 GMT
	I1127 23:55:14.150591   25147 round_trippers.go:580]     Audit-Id: fcf66d41-bdc1-471f-aa1d-1e4481423db2
	I1127 23:55:14.150598   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:14.150606   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:14.150613   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:14.150950   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1127 23:55:14.151360   25147 pod_ready.go:92] pod "kube-proxy-7g246" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:14.151382   25147 pod_ready.go:81] duration metric: took 400.798089ms waiting for pod "kube-proxy-7g246" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:14.151395   25147 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:14.347907   25147 request.go:629] Waited for 196.432506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-883509
	I1127 23:55:14.348002   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-883509
	I1127 23:55:14.348010   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:14.348018   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:14.348025   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:14.351124   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:14.351149   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:14.351156   25147 round_trippers.go:580]     Audit-Id: 483e8cf1-df36-4db4-97b5-a6961beafbfe
	I1127 23:55:14.351172   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:14.351178   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:14.351183   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:14.351189   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:14.351194   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:14 GMT
	I1127 23:55:14.351387   25147 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-883509","namespace":"kube-system","uid":"191f6a8c-7604-4f03-ba5a-d717b27f634b","resourceVersion":"453","creationTimestamp":"2023-11-27T23:54:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f3690327bcacf0b7b0b21542aa013461","kubernetes.io/config.mirror":"f3690327bcacf0b7b0b21542aa013461","kubernetes.io/config.seen":"2023-11-27T23:54:44.598174974Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1127 23:55:14.547045   25147 request.go:629] Waited for 195.294736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:14.547122   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:14.547130   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:14.547138   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:14.547154   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:14.550118   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:14.550146   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:14.550156   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:14.550163   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:14.550171   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:14.550179   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:14 GMT
	I1127 23:55:14.550187   25147 round_trippers.go:580]     Audit-Id: 2c0450c8-2b96-4d1c-9057-cf66779714c4
	I1127 23:55:14.550206   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:14.550629   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1127 23:55:14.550995   25147 pod_ready.go:92] pod "kube-scheduler-multinode-883509" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:14.551013   25147 pod_ready.go:81] duration metric: took 399.609484ms waiting for pod "kube-scheduler-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:14.551025   25147 pod_ready.go:38] duration metric: took 3.600269877s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
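(Illustrative aside, not part of the log.) The readiness loop above repeatedly GETs each control-plane pod and its node until the pod reports a Ready condition of "True" (pod_ready.go:92). The sketch below reproduces that polling pattern with client-go; it is not minikube's own implementation, and the waitPodReady helper, the 500ms poll interval, and the default kubeconfig path are assumptions made for the example.

// Illustrative only: poll a kube-system pod until its Ready condition is "True",
// roughly mirroring the GET loop in the log. Not minikube source code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready: "True"
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly half-second gaps between polls
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	// Assumption: kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(cs, "kube-system", "etcd-multinode-883509", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("etcd pod is Ready")
}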
	I1127 23:55:14.551048   25147 api_server.go:52] waiting for apiserver process to appear ...
	I1127 23:55:14.551098   25147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 23:55:14.565589   25147 command_runner.go:130] > 1095
	I1127 23:55:14.565830   25147 api_server.go:72] duration metric: took 8.763746824s to wait for apiserver process to appear ...
	I1127 23:55:14.565852   25147 api_server.go:88] waiting for apiserver healthz status ...
	I1127 23:55:14.565870   25147 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1127 23:55:14.571072   25147 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I1127 23:55:14.571137   25147 round_trippers.go:463] GET https://192.168.39.159:8443/version
	I1127 23:55:14.571150   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:14.571158   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:14.571167   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:14.572225   25147 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:55:14.572244   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:14.572254   25147 round_trippers.go:580]     Content-Length: 264
	I1127 23:55:14.572263   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:14 GMT
	I1127 23:55:14.572271   25147 round_trippers.go:580]     Audit-Id: bc6a1a0f-6d4f-4183-a9c7-4a05565cdabf
	I1127 23:55:14.572279   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:14.572289   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:14.572302   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:14.572311   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:14.572346   25147 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1127 23:55:14.572431   25147 api_server.go:141] control plane version: v1.28.4
	I1127 23:55:14.572451   25147 api_server.go:131] duration metric: took 6.591181ms to wait for apiserver health ...
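(Illustrative aside, not part of the log.) The two probes above are a raw GET against /healthz, which must return HTTP 200 with the literal body "ok", and a GET against /version, which reports the control-plane build (v1.28.4 here). A minimal sketch of the same checks through client-go follows, assuming a kubeconfig at the default path; it is illustrative only and is not how api_server.go performs them.

// Illustrative only: apiserver healthz and /version probes via client-go.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /healthz: the log treats an HTTP 200 with body "ok" as healthy.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version: the log records gitVersion v1.28.4 for this control plane.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion)
}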
	I1127 23:55:14.572461   25147 system_pods.go:43] waiting for kube-system pods to appear ...
	I1127 23:55:14.747885   25147 request.go:629] Waited for 175.345558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I1127 23:55:14.747940   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I1127 23:55:14.747945   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:14.747957   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:14.747963   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:14.751884   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:14.751912   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:14.751921   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:14.751929   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:14.751942   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:14.751949   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:14.751960   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:14 GMT
	I1127 23:55:14.751968   25147 round_trippers.go:580]     Audit-Id: 9a33efea-635b-448d-8ce9-89d9ad26f455
	I1127 23:55:14.752950   25147 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"445","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53996 chars]
	I1127 23:55:14.755296   25147 system_pods.go:59] 8 kube-system pods found
	I1127 23:55:14.755332   25147 system_pods.go:61] "coredns-5dd5756b68-9vws5" [66ac3c18-9997-49aa-a154-ade69c138f12] Running
	I1127 23:55:14.755343   25147 system_pods.go:61] "etcd-multinode-883509" [58bb8943-0a7c-4d4c-a090-ea8de587f504] Running
	I1127 23:55:14.755349   25147 system_pods.go:61] "kindnet-ztt77" [acbfe061-9a56-4999-baed-ef8d73dc9222] Running
	I1127 23:55:14.755356   25147 system_pods.go:61] "kube-apiserver-multinode-883509" [0a144c07-5db8-418a-ad15-110fabc7f377] Running
	I1127 23:55:14.755366   25147 system_pods.go:61] "kube-controller-manager-multinode-883509" [f8474e48-c333-4772-ae1f-59cdb2bf95eb] Running
	I1127 23:55:14.755377   25147 system_pods.go:61] "kube-proxy-7g246" [c03a2053-f013-4269-a5e1-0acfebfc606c] Running
	I1127 23:55:14.755384   25147 system_pods.go:61] "kube-scheduler-multinode-883509" [191f6a8c-7604-4f03-ba5a-d717b27f634b] Running
	I1127 23:55:14.755394   25147 system_pods.go:61] "storage-provisioner" [e59cdfcb-f7c6-4be9-a2e1-0931d582343c] Running
	I1127 23:55:14.755405   25147 system_pods.go:74] duration metric: took 182.935301ms to wait for pod list to return data ...
	I1127 23:55:14.755417   25147 default_sa.go:34] waiting for default service account to be created ...
	I1127 23:55:14.947914   25147 request.go:629] Waited for 192.408079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/default/serviceaccounts
	I1127 23:55:14.947971   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/default/serviceaccounts
	I1127 23:55:14.947976   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:14.947983   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:14.947989   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:14.951317   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:14.951343   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:14.951353   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:14.951362   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:14.951370   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:14.951379   25147 round_trippers.go:580]     Content-Length: 261
	I1127 23:55:14.951387   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:14 GMT
	I1127 23:55:14.951404   25147 round_trippers.go:580]     Audit-Id: 5a1ba79e-f075-4405-be07-f4215261fcea
	I1127 23:55:14.951412   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:14.951440   25147 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"da7f4830-e8a5-4bf2-be22-fac9b3c7bd70","resourceVersion":"358","creationTimestamp":"2023-11-27T23:55:05Z"}}]}
	I1127 23:55:14.951695   25147 default_sa.go:45] found service account: "default"
	I1127 23:55:14.951734   25147 default_sa.go:55] duration metric: took 196.30689ms for default service account to be created ...
	I1127 23:55:14.951745   25147 system_pods.go:116] waiting for k8s-apps to be running ...
	I1127 23:55:15.147163   25147 request.go:629] Waited for 195.338982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I1127 23:55:15.147240   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I1127 23:55:15.147246   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:15.147254   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:15.147260   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:15.151368   25147 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1127 23:55:15.151394   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:15.151401   25147 round_trippers.go:580]     Audit-Id: d4250bda-79a5-4261-88d0-63ed20d1d869
	I1127 23:55:15.151407   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:15.151412   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:15.151417   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:15.151422   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:15.151428   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:15 GMT
	I1127 23:55:15.152796   25147 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"445","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53996 chars]
	I1127 23:55:15.155123   25147 system_pods.go:86] 8 kube-system pods found
	I1127 23:55:15.155149   25147 system_pods.go:89] "coredns-5dd5756b68-9vws5" [66ac3c18-9997-49aa-a154-ade69c138f12] Running
	I1127 23:55:15.155156   25147 system_pods.go:89] "etcd-multinode-883509" [58bb8943-0a7c-4d4c-a090-ea8de587f504] Running
	I1127 23:55:15.155161   25147 system_pods.go:89] "kindnet-ztt77" [acbfe061-9a56-4999-baed-ef8d73dc9222] Running
	I1127 23:55:15.155170   25147 system_pods.go:89] "kube-apiserver-multinode-883509" [0a144c07-5db8-418a-ad15-110fabc7f377] Running
	I1127 23:55:15.155181   25147 system_pods.go:89] "kube-controller-manager-multinode-883509" [f8474e48-c333-4772-ae1f-59cdb2bf95eb] Running
	I1127 23:55:15.155188   25147 system_pods.go:89] "kube-proxy-7g246" [c03a2053-f013-4269-a5e1-0acfebfc606c] Running
	I1127 23:55:15.155198   25147 system_pods.go:89] "kube-scheduler-multinode-883509" [191f6a8c-7604-4f03-ba5a-d717b27f634b] Running
	I1127 23:55:15.155207   25147 system_pods.go:89] "storage-provisioner" [e59cdfcb-f7c6-4be9-a2e1-0931d582343c] Running
	I1127 23:55:15.155222   25147 system_pods.go:126] duration metric: took 203.465736ms to wait for k8s-apps to be running ...
	I1127 23:55:15.155230   25147 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 23:55:15.155283   25147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:55:15.167943   25147 system_svc.go:56] duration metric: took 12.70462ms WaitForService to wait for kubelet.
	I1127 23:55:15.167971   25147 kubeadm.go:581] duration metric: took 9.365889017s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
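(Illustrative aside, not part of the log.) Before declaring this node ready, the log issues two host-side probes through ssh_runner: sudo pgrep -xnf kube-apiserver.*minikube.* to confirm the apiserver process (PID 1095 above) and systemctl is-active to confirm the kubelet unit. A sketch of the same two checks as plain os/exec calls is below; running them locally rather than over SSH is an assumption for the example.

// Illustrative only: the apiserver-process and kubelet-service probes as local commands.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "sudo pgrep -xnf kube-apiserver.*minikube.*" prints the newest matching PID (1095 in the log).
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("kube-apiserver process not found:", err)
	} else {
		fmt.Printf("kube-apiserver pid: %s", out)
	}

	// "systemctl is-active --quiet kubelet" exits 0 when the unit is active.
	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
	} else {
		fmt.Println("kubelet service is active")
	}
}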
	I1127 23:55:15.167993   25147 node_conditions.go:102] verifying NodePressure condition ...
	I1127 23:55:15.347455   25147 request.go:629] Waited for 179.389447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes
	I1127 23:55:15.347519   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes
	I1127 23:55:15.347524   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:15.347532   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:15.347538   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:15.350211   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:15.350236   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:15.350248   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:15.350257   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:15 GMT
	I1127 23:55:15.350268   25147 round_trippers.go:580]     Audit-Id: e2f0dedf-85c2-4abb-9545-7a613915e44d
	I1127 23:55:15.350276   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:15.350288   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:15.350296   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:15.350487   25147 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5952 chars]
	I1127 23:55:15.350846   25147 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1127 23:55:15.350869   25147 node_conditions.go:123] node cpu capacity is 2
	I1127 23:55:15.350879   25147 node_conditions.go:105] duration metric: took 182.881206ms to run NodePressure ...
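(Illustrative aside, not part of the log.) The NodePressure verification above lists the cluster's nodes and reads their reported capacity (17784752Ki of ephemeral storage and 2 CPUs for this VM). A hedged sketch of the same lookup with client-go, again assuming the default kubeconfig path, follows.

// Illustrative only: list nodes and print the capacity fields the NodePressure check logs.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}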
	I1127 23:55:15.350889   25147 start.go:228] waiting for startup goroutines ...
	I1127 23:55:15.350895   25147 start.go:233] waiting for cluster config update ...
	I1127 23:55:15.350907   25147 start.go:242] writing updated cluster config ...
	I1127 23:55:15.353276   25147 out.go:177] 
	I1127 23:55:15.355047   25147 config.go:182] Loaded profile config "multinode-883509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:55:15.355130   25147 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/config.json ...
	I1127 23:55:15.357125   25147 out.go:177] * Starting worker node multinode-883509-m02 in cluster multinode-883509
	I1127 23:55:15.358746   25147 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:55:15.358768   25147 cache.go:56] Caching tarball of preloaded images
	I1127 23:55:15.358849   25147 preload.go:174] Found /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1127 23:55:15.358862   25147 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1127 23:55:15.358942   25147 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/config.json ...
	I1127 23:55:15.359134   25147 start.go:365] acquiring machines lock for multinode-883509-m02: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1127 23:55:15.359175   25147 start.go:369] acquired machines lock for "multinode-883509-m02" in 22.155µs
	I1127 23:55:15.359196   25147 start.go:93] Provisioning new machine with config: &{Name:multinode-883509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-883509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:
true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1127 23:55:15.359278   25147 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1127 23:55:15.361232   25147 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1127 23:55:15.361323   25147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:55:15.361346   25147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:55:15.375002   25147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42989
	I1127 23:55:15.375465   25147 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:55:15.375956   25147 main.go:141] libmachine: Using API Version  1
	I1127 23:55:15.375979   25147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:55:15.376267   25147 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:55:15.376458   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetMachineName
	I1127 23:55:15.376611   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .DriverName
	I1127 23:55:15.376789   25147 start.go:159] libmachine.API.Create for "multinode-883509" (driver="kvm2")
	I1127 23:55:15.376813   25147 client.go:168] LocalClient.Create starting
	I1127 23:55:15.376844   25147 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem
	I1127 23:55:15.376872   25147 main.go:141] libmachine: Decoding PEM data...
	I1127 23:55:15.376888   25147 main.go:141] libmachine: Parsing certificate...
	I1127 23:55:15.376947   25147 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem
	I1127 23:55:15.376965   25147 main.go:141] libmachine: Decoding PEM data...
	I1127 23:55:15.376977   25147 main.go:141] libmachine: Parsing certificate...
	I1127 23:55:15.376996   25147 main.go:141] libmachine: Running pre-create checks...
	I1127 23:55:15.377005   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .PreCreateCheck
	I1127 23:55:15.377174   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetConfigRaw
	I1127 23:55:15.377598   25147 main.go:141] libmachine: Creating machine...
	I1127 23:55:15.377612   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .Create
	I1127 23:55:15.377733   25147 main.go:141] libmachine: (multinode-883509-m02) Creating KVM machine...
	I1127 23:55:15.378990   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | found existing default KVM network
	I1127 23:55:15.379085   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | found existing private KVM network mk-multinode-883509
	I1127 23:55:15.379264   25147 main.go:141] libmachine: (multinode-883509-m02) Setting up store path in /home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m02 ...
	I1127 23:55:15.379296   25147 main.go:141] libmachine: (multinode-883509-m02) Building disk image from file:///home/jenkins/minikube-integration/17206-4749/.minikube/cache/iso/amd64/minikube-v1.32.1-1701107474-17206-amd64.iso
	I1127 23:55:15.379334   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | I1127 23:55:15.379238   25500 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17206-4749/.minikube
	I1127 23:55:15.379424   25147 main.go:141] libmachine: (multinode-883509-m02) Downloading /home/jenkins/minikube-integration/17206-4749/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17206-4749/.minikube/cache/iso/amd64/minikube-v1.32.1-1701107474-17206-amd64.iso...
	I1127 23:55:15.577843   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | I1127 23:55:15.577696   25500 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m02/id_rsa...
	I1127 23:55:15.729828   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | I1127 23:55:15.729707   25500 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m02/multinode-883509-m02.rawdisk...
	I1127 23:55:15.729880   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | Writing magic tar header
	I1127 23:55:15.729894   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | Writing SSH key tar header
	I1127 23:55:15.729907   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | I1127 23:55:15.729824   25500 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m02 ...
	I1127 23:55:15.729923   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m02
	I1127 23:55:15.729981   25147 main.go:141] libmachine: (multinode-883509-m02) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m02 (perms=drwx------)
	I1127 23:55:15.730005   25147 main.go:141] libmachine: (multinode-883509-m02) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube/machines (perms=drwxr-xr-x)
	I1127 23:55:15.730019   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube/machines
	I1127 23:55:15.730033   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube
	I1127 23:55:15.730043   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749
	I1127 23:55:15.730053   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1127 23:55:15.730067   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | Checking permissions on dir: /home/jenkins
	I1127 23:55:15.730080   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | Checking permissions on dir: /home
	I1127 23:55:15.730090   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | Skipping /home - not owner
	I1127 23:55:15.730106   25147 main.go:141] libmachine: (multinode-883509-m02) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube (perms=drwxr-xr-x)
	I1127 23:55:15.730122   25147 main.go:141] libmachine: (multinode-883509-m02) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749 (perms=drwxrwxr-x)
	I1127 23:55:15.730131   25147 main.go:141] libmachine: (multinode-883509-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1127 23:55:15.730140   25147 main.go:141] libmachine: (multinode-883509-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
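The "Creating ssh key" step above writes the per-machine id_rsa that every later provisioning command relies on. A minimal Go sketch of producing such a keypair (standard library plus golang.org/x/crypto/ssh; path and key size are assumptions, not minikube's exact code):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // writeKeyPair creates an RSA keypair, writing the private key as PEM and
    // an authorized_keys-style public key next to it.
    func writeKeyPair(privPath string) error {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return err
    	}
    	privPEM := pem.EncodeToMemory(&pem.Block{
    		Type:  "RSA PRIVATE KEY",
    		Bytes: x509.MarshalPKCS1PrivateKey(key),
    	})
    	if err := os.WriteFile(privPath, privPEM, 0600); err != nil {
    		return err
    	}
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		return err
    	}
    	return os.WriteFile(privPath+".pub", ssh.MarshalAuthorizedKey(pub), 0644)
    }

    func main() {
    	if err := writeKeyPair("id_rsa"); err != nil {
    		panic(err)
    	}
    }
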
	I1127 23:55:15.730152   25147 main.go:141] libmachine: (multinode-883509-m02) Creating domain...
	I1127 23:55:15.731145   25147 main.go:141] libmachine: (multinode-883509-m02) define libvirt domain using xml: 
	I1127 23:55:15.731162   25147 main.go:141] libmachine: (multinode-883509-m02) <domain type='kvm'>
	I1127 23:55:15.731170   25147 main.go:141] libmachine: (multinode-883509-m02)   <name>multinode-883509-m02</name>
	I1127 23:55:15.731176   25147 main.go:141] libmachine: (multinode-883509-m02)   <memory unit='MiB'>2200</memory>
	I1127 23:55:15.731182   25147 main.go:141] libmachine: (multinode-883509-m02)   <vcpu>2</vcpu>
	I1127 23:55:15.731187   25147 main.go:141] libmachine: (multinode-883509-m02)   <features>
	I1127 23:55:15.731193   25147 main.go:141] libmachine: (multinode-883509-m02)     <acpi/>
	I1127 23:55:15.731200   25147 main.go:141] libmachine: (multinode-883509-m02)     <apic/>
	I1127 23:55:15.731213   25147 main.go:141] libmachine: (multinode-883509-m02)     <pae/>
	I1127 23:55:15.731221   25147 main.go:141] libmachine: (multinode-883509-m02)     
	I1127 23:55:15.731228   25147 main.go:141] libmachine: (multinode-883509-m02)   </features>
	I1127 23:55:15.731236   25147 main.go:141] libmachine: (multinode-883509-m02)   <cpu mode='host-passthrough'>
	I1127 23:55:15.731272   25147 main.go:141] libmachine: (multinode-883509-m02)   
	I1127 23:55:15.731299   25147 main.go:141] libmachine: (multinode-883509-m02)   </cpu>
	I1127 23:55:15.731313   25147 main.go:141] libmachine: (multinode-883509-m02)   <os>
	I1127 23:55:15.731329   25147 main.go:141] libmachine: (multinode-883509-m02)     <type>hvm</type>
	I1127 23:55:15.731344   25147 main.go:141] libmachine: (multinode-883509-m02)     <boot dev='cdrom'/>
	I1127 23:55:15.731355   25147 main.go:141] libmachine: (multinode-883509-m02)     <boot dev='hd'/>
	I1127 23:55:15.731364   25147 main.go:141] libmachine: (multinode-883509-m02)     <bootmenu enable='no'/>
	I1127 23:55:15.731372   25147 main.go:141] libmachine: (multinode-883509-m02)   </os>
	I1127 23:55:15.731378   25147 main.go:141] libmachine: (multinode-883509-m02)   <devices>
	I1127 23:55:15.731390   25147 main.go:141] libmachine: (multinode-883509-m02)     <disk type='file' device='cdrom'>
	I1127 23:55:15.731410   25147 main.go:141] libmachine: (multinode-883509-m02)       <source file='/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m02/boot2docker.iso'/>
	I1127 23:55:15.731427   25147 main.go:141] libmachine: (multinode-883509-m02)       <target dev='hdc' bus='scsi'/>
	I1127 23:55:15.731446   25147 main.go:141] libmachine: (multinode-883509-m02)       <readonly/>
	I1127 23:55:15.731457   25147 main.go:141] libmachine: (multinode-883509-m02)     </disk>
	I1127 23:55:15.731467   25147 main.go:141] libmachine: (multinode-883509-m02)     <disk type='file' device='disk'>
	I1127 23:55:15.731477   25147 main.go:141] libmachine: (multinode-883509-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1127 23:55:15.731496   25147 main.go:141] libmachine: (multinode-883509-m02)       <source file='/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m02/multinode-883509-m02.rawdisk'/>
	I1127 23:55:15.731517   25147 main.go:141] libmachine: (multinode-883509-m02)       <target dev='hda' bus='virtio'/>
	I1127 23:55:15.731531   25147 main.go:141] libmachine: (multinode-883509-m02)     </disk>
	I1127 23:55:15.731543   25147 main.go:141] libmachine: (multinode-883509-m02)     <interface type='network'>
	I1127 23:55:15.731557   25147 main.go:141] libmachine: (multinode-883509-m02)       <source network='mk-multinode-883509'/>
	I1127 23:55:15.731573   25147 main.go:141] libmachine: (multinode-883509-m02)       <model type='virtio'/>
	I1127 23:55:15.731588   25147 main.go:141] libmachine: (multinode-883509-m02)     </interface>
	I1127 23:55:15.731604   25147 main.go:141] libmachine: (multinode-883509-m02)     <interface type='network'>
	I1127 23:55:15.731619   25147 main.go:141] libmachine: (multinode-883509-m02)       <source network='default'/>
	I1127 23:55:15.731631   25147 main.go:141] libmachine: (multinode-883509-m02)       <model type='virtio'/>
	I1127 23:55:15.731645   25147 main.go:141] libmachine: (multinode-883509-m02)     </interface>
	I1127 23:55:15.731657   25147 main.go:141] libmachine: (multinode-883509-m02)     <serial type='pty'>
	I1127 23:55:15.731676   25147 main.go:141] libmachine: (multinode-883509-m02)       <target port='0'/>
	I1127 23:55:15.731689   25147 main.go:141] libmachine: (multinode-883509-m02)     </serial>
	I1127 23:55:15.731702   25147 main.go:141] libmachine: (multinode-883509-m02)     <console type='pty'>
	I1127 23:55:15.731715   25147 main.go:141] libmachine: (multinode-883509-m02)       <target type='serial' port='0'/>
	I1127 23:55:15.731725   25147 main.go:141] libmachine: (multinode-883509-m02)     </console>
	I1127 23:55:15.731741   25147 main.go:141] libmachine: (multinode-883509-m02)     <rng model='virtio'>
	I1127 23:55:15.731756   25147 main.go:141] libmachine: (multinode-883509-m02)       <backend model='random'>/dev/random</backend>
	I1127 23:55:15.731768   25147 main.go:141] libmachine: (multinode-883509-m02)     </rng>
	I1127 23:55:15.731778   25147 main.go:141] libmachine: (multinode-883509-m02)     
	I1127 23:55:15.731792   25147 main.go:141] libmachine: (multinode-883509-m02)     
	I1127 23:55:15.731805   25147 main.go:141] libmachine: (multinode-883509-m02)   </devices>
	I1127 23:55:15.731820   25147 main.go:141] libmachine: (multinode-883509-m02) </domain>
	I1127 23:55:15.731837   25147 main.go:141] libmachine: (multinode-883509-m02) 
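The XML printed above is handed to libvirt to define and boot the node VM. A minimal sketch of that call sequence with the libvirt Go bindings (libvirt.org/go/libvirt); the connection URI and error handling are simplified and this is not the driver's actual code:

    package main

    import (
    	"log"

    	libvirt "libvirt.org/go/libvirt"
    )

    func main() {
    	// qemu:///system is an assumption for a local system libvirt daemon.
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	domainXML := `<domain type='kvm'>...</domain>` // the XML shown in the log above

    	// Define the persistent domain from the XML, then start it.
    	dom, err := conn.DomainDefineXML(domainXML)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil { // Create() boots the defined domain
    		log.Fatal(err)
    	}
    	log.Println("domain started")
    }
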
	I1127 23:55:15.738683   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:a6:cd:24 in network default
	I1127 23:55:15.739317   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:15.739338   25147 main.go:141] libmachine: (multinode-883509-m02) Ensuring networks are active...
	I1127 23:55:15.740204   25147 main.go:141] libmachine: (multinode-883509-m02) Ensuring network default is active
	I1127 23:55:15.740672   25147 main.go:141] libmachine: (multinode-883509-m02) Ensuring network mk-multinode-883509 is active
	I1127 23:55:15.741220   25147 main.go:141] libmachine: (multinode-883509-m02) Getting domain xml...
	I1127 23:55:15.741977   25147 main.go:141] libmachine: (multinode-883509-m02) Creating domain...
	I1127 23:55:16.986356   25147 main.go:141] libmachine: (multinode-883509-m02) Waiting to get IP...
	I1127 23:55:16.987090   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:16.987502   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | unable to find current IP address of domain multinode-883509-m02 in network mk-multinode-883509
	I1127 23:55:16.987525   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | I1127 23:55:16.987473   25500 retry.go:31] will retry after 228.537489ms: waiting for machine to come up
	I1127 23:55:17.218057   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:17.218597   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | unable to find current IP address of domain multinode-883509-m02 in network mk-multinode-883509
	I1127 23:55:17.218628   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | I1127 23:55:17.218543   25500 retry.go:31] will retry after 298.780688ms: waiting for machine to come up
	I1127 23:55:17.519078   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:17.519585   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | unable to find current IP address of domain multinode-883509-m02 in network mk-multinode-883509
	I1127 23:55:17.520248   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | I1127 23:55:17.519536   25500 retry.go:31] will retry after 310.482582ms: waiting for machine to come up
	I1127 23:55:17.831937   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:17.832407   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | unable to find current IP address of domain multinode-883509-m02 in network mk-multinode-883509
	I1127 23:55:17.832446   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | I1127 23:55:17.832351   25500 retry.go:31] will retry after 536.489554ms: waiting for machine to come up
	I1127 23:55:18.370957   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:18.371338   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | unable to find current IP address of domain multinode-883509-m02 in network mk-multinode-883509
	I1127 23:55:18.371362   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | I1127 23:55:18.371299   25500 retry.go:31] will retry after 687.321673ms: waiting for machine to come up
	I1127 23:55:19.060169   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:19.060769   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | unable to find current IP address of domain multinode-883509-m02 in network mk-multinode-883509
	I1127 23:55:19.060798   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | I1127 23:55:19.060713   25500 retry.go:31] will retry after 819.692457ms: waiting for machine to come up
	I1127 23:55:19.881766   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:19.882277   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | unable to find current IP address of domain multinode-883509-m02 in network mk-multinode-883509
	I1127 23:55:19.882311   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | I1127 23:55:19.882220   25500 retry.go:31] will retry after 759.28428ms: waiting for machine to come up
	I1127 23:55:20.643682   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:20.644175   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | unable to find current IP address of domain multinode-883509-m02 in network mk-multinode-883509
	I1127 23:55:20.644198   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | I1127 23:55:20.644130   25500 retry.go:31] will retry after 1.076055897s: waiting for machine to come up
	I1127 23:55:21.721768   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:21.722128   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | unable to find current IP address of domain multinode-883509-m02 in network mk-multinode-883509
	I1127 23:55:21.722155   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | I1127 23:55:21.722081   25500 retry.go:31] will retry after 1.204098758s: waiting for machine to come up
	I1127 23:55:22.927312   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:22.927688   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | unable to find current IP address of domain multinode-883509-m02 in network mk-multinode-883509
	I1127 23:55:22.927706   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | I1127 23:55:22.927633   25500 retry.go:31] will retry after 2.124827345s: waiting for machine to come up
	I1127 23:55:25.054822   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:25.055328   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | unable to find current IP address of domain multinode-883509-m02 in network mk-multinode-883509
	I1127 23:55:25.055363   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | I1127 23:55:25.055280   25500 retry.go:31] will retry after 2.894774559s: waiting for machine to come up
	I1127 23:55:27.953467   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:27.953817   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | unable to find current IP address of domain multinode-883509-m02 in network mk-multinode-883509
	I1127 23:55:27.953848   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | I1127 23:55:27.953771   25500 retry.go:31] will retry after 3.135815373s: waiting for machine to come up
	I1127 23:55:31.090686   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:31.091138   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | unable to find current IP address of domain multinode-883509-m02 in network mk-multinode-883509
	I1127 23:55:31.091167   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | I1127 23:55:31.091094   25500 retry.go:31] will retry after 2.886252503s: waiting for machine to come up
	I1127 23:55:33.981247   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:33.981708   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | unable to find current IP address of domain multinode-883509-m02 in network mk-multinode-883509
	I1127 23:55:33.981736   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | I1127 23:55:33.981641   25500 retry.go:31] will retry after 5.183465421s: waiting for machine to come up
	I1127 23:55:39.169937   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:39.170400   25147 main.go:141] libmachine: (multinode-883509-m02) Found IP for machine: 192.168.39.97
	I1127 23:55:39.170426   25147 main.go:141] libmachine: (multinode-883509-m02) Reserving static IP address...
	I1127 23:55:39.170437   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has current primary IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:39.170878   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | unable to find host DHCP lease matching {name: "multinode-883509-m02", mac: "52:54:00:10:23:98", ip: "192.168.39.97"} in network mk-multinode-883509
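The "waiting for machine to come up" retries above poll the network's DHCP leases with a growing delay until the new MAC has an address. A minimal sketch of that polling pattern; the lookup function here is hypothetical and stands in for the lease query:

    package main

    import (
    	"fmt"
    	"time"
    )

    // waitForIP polls lookup until it yields an address or the deadline passes,
    // growing the delay after each miss, much like the retry.go lines above.
    func waitForIP(lookup func(mac string) (string, bool), mac string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, ok := lookup(mac); ok {
    			return ip, nil
    		}
    		time.Sleep(delay)
    		if delay < 5*time.Second { // cap the growth, roughly matching the log
    			delay += delay / 2
    		}
    	}
    	return "", fmt.Errorf("no DHCP lease for %s within %s", mac, timeout)
    }

    func main() {
    	// Fake lease table for illustration: the address "appears" on the 4th poll.
    	polls := 0
    	lookup := func(mac string) (string, bool) {
    		polls++
    		if polls < 4 {
    			return "", false
    		}
    		return "192.168.39.97", true
    	}
    	ip, err := waitForIP(lookup, "52:54:00:10:23:98", 30*time.Second)
    	fmt.Println(ip, err)
    }
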
	I1127 23:55:39.246069   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | Getting to WaitForSSH function...
	I1127 23:55:39.246097   25147 main.go:141] libmachine: (multinode-883509-m02) Reserved static IP address: 192.168.39.97
	I1127 23:55:39.246109   25147 main.go:141] libmachine: (multinode-883509-m02) Waiting for SSH to be available...
	I1127 23:55:39.248632   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:39.249120   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:minikube Clientid:01:52:54:00:10:23:98}
	I1127 23:55:39.249149   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:39.249288   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | Using SSH client type: external
	I1127 23:55:39.249317   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m02/id_rsa (-rw-------)
	I1127 23:55:39.249353   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1127 23:55:39.249368   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | About to run SSH command:
	I1127 23:55:39.249386   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | exit 0
	I1127 23:55:39.336596   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | SSH cmd err, output: <nil>: 
	I1127 23:55:39.336872   25147 main.go:141] libmachine: (multinode-883509-m02) KVM machine creation complete!
	I1127 23:55:39.337207   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetConfigRaw
	I1127 23:55:39.337791   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .DriverName
	I1127 23:55:39.337970   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .DriverName
	I1127 23:55:39.338089   25147 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1127 23:55:39.338106   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetState
	I1127 23:55:39.339298   25147 main.go:141] libmachine: Detecting operating system of created instance...
	I1127 23:55:39.339316   25147 main.go:141] libmachine: Waiting for SSH to be available...
	I1127 23:55:39.339325   25147 main.go:141] libmachine: Getting to WaitForSSH function...
	I1127 23:55:39.339332   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	I1127 23:55:39.341649   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:39.342033   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1127 23:55:39.342064   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:39.342171   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHPort
	I1127 23:55:39.342334   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1127 23:55:39.342465   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1127 23:55:39.342565   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHUsername
	I1127 23:55:39.342712   25147 main.go:141] libmachine: Using SSH client type: native
	I1127 23:55:39.343054   25147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I1127 23:55:39.343066   25147 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1127 23:55:39.455815   25147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
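Both SSH probes above simply run `exit 0` on the guest and treat a clean exit as "SSH is available". A minimal sketch of that check with golang.org/x/crypto/ssh; host key verification is disabled here only to mirror the StrictHostKeyChecking=no flags in the log, and the address, user, and key path are taken from the log for illustration:

    package main

    import (
    	"log"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func sshReady(addr, user, keyPath string) error {
    	keyPEM, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(keyPEM)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only, not a production setting
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	return sess.Run("exit 0") // nil error means the guest accepted the key and ran the command
    }

    func main() {
    	if err := sshReady("192.168.39.97:22", "docker", "id_rsa"); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("ssh is available")
    }
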
	I1127 23:55:39.455847   25147 main.go:141] libmachine: Detecting the provisioner...
	I1127 23:55:39.455859   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	I1127 23:55:39.458555   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:39.458881   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1127 23:55:39.458912   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:39.459101   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHPort
	I1127 23:55:39.459291   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1127 23:55:39.459414   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1127 23:55:39.459554   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHUsername
	I1127 23:55:39.459714   25147 main.go:141] libmachine: Using SSH client type: native
	I1127 23:55:39.460070   25147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I1127 23:55:39.460081   25147 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1127 23:55:39.573412   25147 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g8be4f20-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1127 23:55:39.573487   25147 main.go:141] libmachine: found compatible host: buildroot
	I1127 23:55:39.573501   25147 main.go:141] libmachine: Provisioning with buildroot...
	I1127 23:55:39.573515   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetMachineName
	I1127 23:55:39.573757   25147 buildroot.go:166] provisioning hostname "multinode-883509-m02"
	I1127 23:55:39.573780   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetMachineName
	I1127 23:55:39.573951   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	I1127 23:55:39.576195   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:39.576503   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1127 23:55:39.576546   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:39.576686   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHPort
	I1127 23:55:39.576888   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1127 23:55:39.577024   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1127 23:55:39.577164   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHUsername
	I1127 23:55:39.577308   25147 main.go:141] libmachine: Using SSH client type: native
	I1127 23:55:39.577602   25147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I1127 23:55:39.577621   25147 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-883509-m02 && echo "multinode-883509-m02" | sudo tee /etc/hostname
	I1127 23:55:39.702287   25147 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-883509-m02
	
	I1127 23:55:39.702317   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	I1127 23:55:39.704915   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:39.705274   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1127 23:55:39.705308   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:39.705469   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHPort
	I1127 23:55:39.705652   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1127 23:55:39.705789   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1127 23:55:39.705903   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHUsername
	I1127 23:55:39.706054   25147 main.go:141] libmachine: Using SSH client type: native
	I1127 23:55:39.706360   25147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I1127 23:55:39.706380   25147 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-883509-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-883509-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-883509-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 23:55:39.825327   25147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 23:55:39.825357   25147 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1127 23:55:39.825375   25147 buildroot.go:174] setting up certificates
	I1127 23:55:39.825386   25147 provision.go:83] configureAuth start
	I1127 23:55:39.825394   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetMachineName
	I1127 23:55:39.825687   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetIP
	I1127 23:55:39.827970   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:39.828334   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1127 23:55:39.828365   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:39.828536   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	I1127 23:55:39.830557   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:39.830822   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1127 23:55:39.830849   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:39.830963   25147 provision.go:138] copyHostCerts
	I1127 23:55:39.830992   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1127 23:55:39.831028   25147 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1127 23:55:39.831041   25147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1127 23:55:39.831113   25147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1127 23:55:39.831194   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1127 23:55:39.831211   25147 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1127 23:55:39.831217   25147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1127 23:55:39.831242   25147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1127 23:55:39.831284   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1127 23:55:39.831299   25147 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1127 23:55:39.831305   25147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1127 23:55:39.831325   25147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1127 23:55:39.831365   25147 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.multinode-883509-m02 san=[192.168.39.97 192.168.39.97 localhost 127.0.0.1 minikube multinode-883509-m02]
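The server cert generated above is a CA-signed leaf whose SANs cover the machine IP, localhost, and both node names. A generic sketch of issuing such a certificate with crypto/x509 (CA loading is elided; the names and IPs mirror the san=[...] list in the log, and none of this is minikube's actual certificate code):

    package certs

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert issues a leaf certificate signed by the given CA, carrying
    // the DNS and IP SANs listed by the provisioner above.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-883509-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "multinode-883509-m02"},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.39.97"), net.ParseIP("127.0.0.1")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }
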
	I1127 23:55:40.025342   25147 provision.go:172] copyRemoteCerts
	I1127 23:55:40.025395   25147 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 23:55:40.025422   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	I1127 23:55:40.028037   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:40.028415   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1127 23:55:40.028447   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:40.028584   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHPort
	I1127 23:55:40.028749   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1127 23:55:40.028911   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHUsername
	I1127 23:55:40.029009   25147 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m02/id_rsa Username:docker}
	I1127 23:55:40.112950   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1127 23:55:40.113021   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1127 23:55:40.135918   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1127 23:55:40.135989   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1127 23:55:40.158048   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1127 23:55:40.158114   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1127 23:55:40.180160   25147 provision.go:86] duration metric: configureAuth took 354.760559ms
	I1127 23:55:40.180188   25147 buildroot.go:189] setting minikube options for container-runtime
	I1127 23:55:40.180343   25147 config.go:182] Loaded profile config "multinode-883509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:55:40.180430   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	I1127 23:55:40.182835   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:40.183161   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1127 23:55:40.183192   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:40.183448   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHPort
	I1127 23:55:40.183616   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1127 23:55:40.183776   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1127 23:55:40.183891   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHUsername
	I1127 23:55:40.184026   25147 main.go:141] libmachine: Using SSH client type: native
	I1127 23:55:40.184339   25147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I1127 23:55:40.184360   25147 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1127 23:55:40.500550   25147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1127 23:55:40.500579   25147 main.go:141] libmachine: Checking connection to Docker...
	I1127 23:55:40.500591   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetURL
	I1127 23:55:40.501752   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | Using libvirt version 6000000
	I1127 23:55:40.503950   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:40.504311   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1127 23:55:40.504332   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:40.504521   25147 main.go:141] libmachine: Docker is up and running!
	I1127 23:55:40.504540   25147 main.go:141] libmachine: Reticulating splines...
	I1127 23:55:40.504548   25147 client.go:171] LocalClient.Create took 25.127724666s
	I1127 23:55:40.504572   25147 start.go:167] duration metric: libmachine.API.Create for "multinode-883509" took 25.127784161s
	I1127 23:55:40.504585   25147 start.go:300] post-start starting for "multinode-883509-m02" (driver="kvm2")
	I1127 23:55:40.504601   25147 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 23:55:40.504623   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .DriverName
	I1127 23:55:40.504857   25147 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 23:55:40.504905   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	I1127 23:55:40.507036   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:40.507382   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1127 23:55:40.507406   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:40.507528   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHPort
	I1127 23:55:40.507672   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1127 23:55:40.507793   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHUsername
	I1127 23:55:40.507905   25147 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m02/id_rsa Username:docker}
	I1127 23:55:40.594900   25147 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 23:55:40.598926   25147 command_runner.go:130] > NAME=Buildroot
	I1127 23:55:40.598945   25147 command_runner.go:130] > VERSION=2021.02.12-1-g8be4f20-dirty
	I1127 23:55:40.598960   25147 command_runner.go:130] > ID=buildroot
	I1127 23:55:40.598973   25147 command_runner.go:130] > VERSION_ID=2021.02.12
	I1127 23:55:40.598981   25147 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1127 23:55:40.599122   25147 info.go:137] Remote host: Buildroot 2021.02.12
	I1127 23:55:40.599146   25147 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1127 23:55:40.599213   25147 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1127 23:55:40.599327   25147 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1127 23:55:40.599341   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> /etc/ssl/certs/119302.pem
	I1127 23:55:40.599443   25147 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1127 23:55:40.608283   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1127 23:55:40.629576   25147 start.go:303] post-start completed in 124.975958ms
	I1127 23:55:40.629617   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetConfigRaw
	I1127 23:55:40.630158   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetIP
	I1127 23:55:40.632488   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:40.632836   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1127 23:55:40.632865   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:40.633071   25147 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/config.json ...
	I1127 23:55:40.633227   25147 start.go:128] duration metric: createHost completed in 25.273937899s
	I1127 23:55:40.633253   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	I1127 23:55:40.635404   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:40.635759   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1127 23:55:40.635784   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:40.635891   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHPort
	I1127 23:55:40.636039   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1127 23:55:40.636179   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1127 23:55:40.636364   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHUsername
	I1127 23:55:40.636508   25147 main.go:141] libmachine: Using SSH client type: native
	I1127 23:55:40.636965   25147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I1127 23:55:40.636982   25147 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1127 23:55:40.749474   25147 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701129340.731900876
	
	I1127 23:55:40.749497   25147 fix.go:206] guest clock: 1701129340.731900876
	I1127 23:55:40.749506   25147 fix.go:219] Guest: 2023-11-27 23:55:40.731900876 +0000 UTC Remote: 2023-11-27 23:55:40.633239303 +0000 UTC m=+91.743089074 (delta=98.661573ms)
	I1127 23:55:40.749525   25147 fix.go:190] guest clock delta is within tolerance: 98.661573ms
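The guest clock check above runs `date` on the VM and compares the result against the host, accepting the machine only if the skew stays within tolerance. A minimal sketch of that comparison, using the timestamps from the log (the one-second tolerance is an assumption for illustration):

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDelta returns the absolute difference between the guest's reported
    // Unix time (seconds with a fractional part, as from `date +%s.%N`) and the
    // host's reference time.
    func clockDelta(guestUnixSeconds float64, host time.Time) time.Duration {
    	guest := time.Unix(0, int64(guestUnixSeconds*float64(time.Second)))
    	d := host.Sub(guest)
    	if d < 0 {
    		d = -d
    	}
    	return d
    }

    func main() {
    	// "Remote" host time and guest value taken from the log above.
    	host := time.Date(2023, time.November, 27, 23, 55, 40, 633239303, time.UTC)
    	delta := clockDelta(1701129340.731900876, host)
    	fmt.Println(delta, delta <= time.Second) // ~98ms skew, within a 1s tolerance
    }
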
	I1127 23:55:40.749531   25147 start.go:83] releasing machines lock for "multinode-883509-m02", held for 25.390345035s
	I1127 23:55:40.749561   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .DriverName
	I1127 23:55:40.749825   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetIP
	I1127 23:55:40.752258   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:40.752650   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1127 23:55:40.752672   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:40.755025   25147 out.go:177] * Found network options:
	I1127 23:55:40.756589   25147 out.go:177]   - NO_PROXY=192.168.39.159
	W1127 23:55:40.757865   25147 proxy.go:119] fail to check proxy env: Error ip not in block
	I1127 23:55:40.757900   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .DriverName
	I1127 23:55:40.758328   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .DriverName
	I1127 23:55:40.758506   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .DriverName
	I1127 23:55:40.758581   25147 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 23:55:40.758617   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	W1127 23:55:40.758656   25147 proxy.go:119] fail to check proxy env: Error ip not in block
	I1127 23:55:40.758715   25147 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1127 23:55:40.758732   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	I1127 23:55:40.761197   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:40.761402   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:40.761573   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1127 23:55:40.761605   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:40.761750   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHPort
	I1127 23:55:40.761766   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1127 23:55:40.761793   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:40.761918   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHPort
	I1127 23:55:40.761918   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1127 23:55:40.762061   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1127 23:55:40.762111   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHUsername
	I1127 23:55:40.762234   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHUsername
	I1127 23:55:40.762249   25147 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m02/id_rsa Username:docker}
	I1127 23:55:40.762358   25147 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m02/id_rsa Username:docker}
	I1127 23:55:40.995677   25147 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1127 23:55:40.995703   25147 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1127 23:55:41.001533   25147 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1127 23:55:41.001793   25147 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1127 23:55:41.001857   25147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:55:41.017419   25147 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1127 23:55:41.017449   25147 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
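The find/mv command above parks any bridge or podman CNI config under a .mk_disabled name so only the cluster's own CNI config is left in /etc/cni/net.d. A minimal Go sketch of the same rename pass (directory and patterns taken from the log; not the cni.go implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableBridgeCNI renames bridge/podman CNI configs to *.mk_disabled,
    // mirroring the find -exec mv command above.
    func disableBridgeCNI(dir string) ([]string, error) {
    	var disabled []string
    	for _, pattern := range []string{"*bridge*", "*podman*"} {
    		matches, err := filepath.Glob(filepath.Join(dir, pattern))
    		if err != nil {
    			return nil, err
    		}
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already disabled on a previous run
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				return nil, err
    			}
    			disabled = append(disabled, m)
    		}
    	}
    	return disabled, nil
    }

    func main() {
    	disabled, err := disableBridgeCNI("/etc/cni/net.d")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("disabled:", disabled)
    }
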
	I1127 23:55:41.017458   25147 start.go:472] detecting cgroup driver to use...
	I1127 23:55:41.017511   25147 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 23:55:41.031860   25147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 23:55:41.045122   25147 docker.go:203] disabling cri-docker service (if available) ...
	I1127 23:55:41.045174   25147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1127 23:55:41.058197   25147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1127 23:55:41.071798   25147 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1127 23:55:41.087122   25147 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1127 23:55:41.198372   25147 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1127 23:55:41.212348   25147 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1127 23:55:41.325297   25147 docker.go:219] disabling docker service ...
	I1127 23:55:41.325365   25147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1127 23:55:41.338503   25147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1127 23:55:41.352207   25147 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1127 23:55:41.352281   25147 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1127 23:55:41.455597   25147 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1127 23:55:41.455674   25147 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1127 23:55:41.563051   25147 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1127 23:55:41.563074   25147 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1127 23:55:41.563132   25147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1127 23:55:41.576157   25147 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 23:55:41.592557   25147 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1127 23:55:41.593019   25147 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1127 23:55:41.593075   25147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:55:41.601646   25147 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1127 23:55:41.601686   25147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:55:41.610317   25147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1127 23:55:41.619049   25147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
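Taken together, the three sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (illustrative excerpt only; section headers and the rest of the file are assumed unchanged):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
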
	I1127 23:55:41.627622   25147 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 23:55:41.636635   25147 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 23:55:41.644304   25147 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1127 23:55:41.644476   25147 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1127 23:55:41.644527   25147 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1127 23:55:41.656784   25147 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1127 23:55:41.664349   25147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 23:55:41.784895   25147 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1127 23:55:41.964345   25147 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1127 23:55:41.964411   25147 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1127 23:55:41.968958   25147 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1127 23:55:41.968981   25147 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1127 23:55:41.968991   25147 command_runner.go:130] > Device: 16h/22d	Inode: 717         Links: 1
	I1127 23:55:41.969001   25147 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 23:55:41.969009   25147 command_runner.go:130] > Access: 2023-11-27 23:55:41.936037517 +0000
	I1127 23:55:41.969018   25147 command_runner.go:130] > Modify: 2023-11-27 23:55:41.936037517 +0000
	I1127 23:55:41.969027   25147 command_runner.go:130] > Change: 2023-11-27 23:55:41.936037517 +0000
	I1127 23:55:41.969039   25147 command_runner.go:130] >  Birth: -
	I1127 23:55:41.969079   25147 start.go:540] Will wait 60s for crictl version
	I1127 23:55:41.969128   25147 ssh_runner.go:195] Run: which crictl
	I1127 23:55:41.972679   25147 command_runner.go:130] > /usr/bin/crictl
	I1127 23:55:41.972721   25147 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1127 23:55:42.014876   25147 command_runner.go:130] > Version:  0.1.0
	I1127 23:55:42.014902   25147 command_runner.go:130] > RuntimeName:  cri-o
	I1127 23:55:42.014910   25147 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1127 23:55:42.014918   25147 command_runner.go:130] > RuntimeApiVersion:  v1
	I1127 23:55:42.014938   25147 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1127 23:55:42.015001   25147 ssh_runner.go:195] Run: crio --version
	I1127 23:55:42.068083   25147 command_runner.go:130] > crio version 1.24.1
	I1127 23:55:42.068112   25147 command_runner.go:130] > Version:          1.24.1
	I1127 23:55:42.068121   25147 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1127 23:55:42.068127   25147 command_runner.go:130] > GitTreeState:     dirty
	I1127 23:55:42.068136   25147 command_runner.go:130] > BuildDate:        2023-11-27T22:40:48Z
	I1127 23:55:42.068143   25147 command_runner.go:130] > GoVersion:        go1.19.9
	I1127 23:55:42.068150   25147 command_runner.go:130] > Compiler:         gc
	I1127 23:55:42.068161   25147 command_runner.go:130] > Platform:         linux/amd64
	I1127 23:55:42.068177   25147 command_runner.go:130] > Linkmode:         dynamic
	I1127 23:55:42.068192   25147 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1127 23:55:42.068202   25147 command_runner.go:130] > SeccompEnabled:   true
	I1127 23:55:42.068214   25147 command_runner.go:130] > AppArmorEnabled:  false
	I1127 23:55:42.069419   25147 ssh_runner.go:195] Run: crio --version
	I1127 23:55:42.111736   25147 command_runner.go:130] > crio version 1.24.1
	I1127 23:55:42.111762   25147 command_runner.go:130] > Version:          1.24.1
	I1127 23:55:42.111772   25147 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1127 23:55:42.111778   25147 command_runner.go:130] > GitTreeState:     dirty
	I1127 23:55:42.111786   25147 command_runner.go:130] > BuildDate:        2023-11-27T22:40:48Z
	I1127 23:55:42.111792   25147 command_runner.go:130] > GoVersion:        go1.19.9
	I1127 23:55:42.111798   25147 command_runner.go:130] > Compiler:         gc
	I1127 23:55:42.111805   25147 command_runner.go:130] > Platform:         linux/amd64
	I1127 23:55:42.111812   25147 command_runner.go:130] > Linkmode:         dynamic
	I1127 23:55:42.111823   25147 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1127 23:55:42.111831   25147 command_runner.go:130] > SeccompEnabled:   true
	I1127 23:55:42.111843   25147 command_runner.go:130] > AppArmorEnabled:  false
	I1127 23:55:42.115929   25147 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1127 23:55:42.117349   25147 out.go:177]   - env NO_PROXY=192.168.39.159
	I1127 23:55:42.118591   25147 main.go:141] libmachine: (multinode-883509-m02) Calling .GetIP
	I1127 23:55:42.121381   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:42.121849   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1127 23:55:42.121870   25147 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:55:42.122139   25147 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1127 23:55:42.126170   25147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:55:42.138468   25147 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509 for IP: 192.168.39.97
	I1127 23:55:42.138490   25147 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:55:42.138621   25147 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1127 23:55:42.138664   25147 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1127 23:55:42.138677   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1127 23:55:42.138691   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1127 23:55:42.138703   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1127 23:55:42.138716   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1127 23:55:42.138760   25147 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1127 23:55:42.138787   25147 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1127 23:55:42.138797   25147 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1127 23:55:42.138821   25147 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1127 23:55:42.138843   25147 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1127 23:55:42.138865   25147 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1127 23:55:42.138902   25147 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1127 23:55:42.138926   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:55:42.138938   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem -> /usr/share/ca-certificates/11930.pem
	I1127 23:55:42.138950   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> /usr/share/ca-certificates/119302.pem
	I1127 23:55:42.139235   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1127 23:55:42.161861   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1127 23:55:42.183547   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1127 23:55:42.205127   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1127 23:55:42.227236   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1127 23:55:42.249288   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1127 23:55:42.271951   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1127 23:55:42.294129   25147 ssh_runner.go:195] Run: openssl version
	I1127 23:55:42.299404   25147 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1127 23:55:42.299617   25147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1127 23:55:42.308795   25147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:55:42.312909   25147 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:55:42.312981   25147 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:55:42.313020   25147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:55:42.317999   25147 command_runner.go:130] > b5213941
	I1127 23:55:42.318258   25147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1127 23:55:42.327396   25147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1127 23:55:42.336435   25147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1127 23:55:42.340939   25147 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1127 23:55:42.340974   25147 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1127 23:55:42.341011   25147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1127 23:55:42.346364   25147 command_runner.go:130] > 51391683
	I1127 23:55:42.346415   25147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1127 23:55:42.357325   25147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1127 23:55:42.366933   25147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1127 23:55:42.371064   25147 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1127 23:55:42.371292   25147 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1127 23:55:42.371343   25147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1127 23:55:42.376552   25147 command_runner.go:130] > 3ec20f2e
	I1127 23:55:42.376859   25147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1127 23:55:42.386657   25147 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1127 23:55:42.390641   25147 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 23:55:42.390792   25147 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 23:55:42.390875   25147 ssh_runner.go:195] Run: crio config
	I1127 23:55:42.437769   25147 command_runner.go:130] ! time="2023-11-27 23:55:42.423627736Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1127 23:55:42.437861   25147 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1127 23:55:42.445175   25147 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1127 23:55:42.445195   25147 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1127 23:55:42.445207   25147 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1127 23:55:42.445213   25147 command_runner.go:130] > #
	I1127 23:55:42.445227   25147 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1127 23:55:42.445234   25147 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1127 23:55:42.445240   25147 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1127 23:55:42.445247   25147 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1127 23:55:42.445251   25147 command_runner.go:130] > # reload'.
	I1127 23:55:42.445257   25147 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1127 23:55:42.445263   25147 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1127 23:55:42.445270   25147 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1127 23:55:42.445276   25147 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1127 23:55:42.445283   25147 command_runner.go:130] > [crio]
	I1127 23:55:42.445293   25147 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1127 23:55:42.445307   25147 command_runner.go:130] > # containers images, in this directory.
	I1127 23:55:42.445315   25147 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1127 23:55:42.445325   25147 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1127 23:55:42.445329   25147 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1127 23:55:42.445335   25147 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1127 23:55:42.445346   25147 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1127 23:55:42.445352   25147 command_runner.go:130] > storage_driver = "overlay"
	I1127 23:55:42.445358   25147 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1127 23:55:42.445371   25147 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1127 23:55:42.445382   25147 command_runner.go:130] > storage_option = [
	I1127 23:55:42.445409   25147 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1127 23:55:42.445426   25147 command_runner.go:130] > ]
	I1127 23:55:42.445434   25147 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1127 23:55:42.445440   25147 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1127 23:55:42.445447   25147 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1127 23:55:42.445456   25147 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1127 23:55:42.445470   25147 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1127 23:55:42.445481   25147 command_runner.go:130] > # always happen on a node reboot
	I1127 23:55:42.445492   25147 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1127 23:55:42.445504   25147 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1127 23:55:42.445517   25147 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1127 23:55:42.445529   25147 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1127 23:55:42.445539   25147 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1127 23:55:42.445555   25147 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1127 23:55:42.445571   25147 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1127 23:55:42.445581   25147 command_runner.go:130] > # internal_wipe = true
	I1127 23:55:42.445593   25147 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1127 23:55:42.445606   25147 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1127 23:55:42.445615   25147 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1127 23:55:42.445622   25147 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1127 23:55:42.445636   25147 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1127 23:55:42.445646   25147 command_runner.go:130] > [crio.api]
	I1127 23:55:42.445657   25147 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1127 23:55:42.445668   25147 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1127 23:55:42.445679   25147 command_runner.go:130] > # IP address on which the stream server will listen.
	I1127 23:55:42.445690   25147 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1127 23:55:42.445704   25147 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1127 23:55:42.445715   25147 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1127 23:55:42.445726   25147 command_runner.go:130] > # stream_port = "0"
	I1127 23:55:42.445738   25147 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1127 23:55:42.445748   25147 command_runner.go:130] > # stream_enable_tls = false
	I1127 23:55:42.445760   25147 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1127 23:55:42.445770   25147 command_runner.go:130] > # stream_idle_timeout = ""
	I1127 23:55:42.445782   25147 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1127 23:55:42.445792   25147 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1127 23:55:42.445802   25147 command_runner.go:130] > # minutes.
	I1127 23:55:42.445813   25147 command_runner.go:130] > # stream_tls_cert = ""
	I1127 23:55:42.445827   25147 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1127 23:55:42.445840   25147 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1127 23:55:42.445850   25147 command_runner.go:130] > # stream_tls_key = ""
	I1127 23:55:42.445858   25147 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1127 23:55:42.445871   25147 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1127 23:55:42.445883   25147 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1127 23:55:42.445892   25147 command_runner.go:130] > # stream_tls_ca = ""
	I1127 23:55:42.445907   25147 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1127 23:55:42.445918   25147 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1127 23:55:42.445931   25147 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1127 23:55:42.445941   25147 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1127 23:55:42.445967   25147 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1127 23:55:42.445978   25147 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1127 23:55:42.445987   25147 command_runner.go:130] > [crio.runtime]
	I1127 23:55:42.446001   25147 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1127 23:55:42.446013   25147 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1127 23:55:42.446023   25147 command_runner.go:130] > # "nofile=1024:2048"
	I1127 23:55:42.446036   25147 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1127 23:55:42.446046   25147 command_runner.go:130] > # default_ulimits = [
	I1127 23:55:42.446053   25147 command_runner.go:130] > # ]
	I1127 23:55:42.446060   25147 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1127 23:55:42.446066   25147 command_runner.go:130] > # no_pivot = false
	I1127 23:55:42.446072   25147 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1127 23:55:42.446080   25147 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1127 23:55:42.446087   25147 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1127 23:55:42.446093   25147 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1127 23:55:42.446100   25147 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1127 23:55:42.446107   25147 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1127 23:55:42.446114   25147 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1127 23:55:42.446118   25147 command_runner.go:130] > # Cgroup setting for conmon
	I1127 23:55:42.446128   25147 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1127 23:55:42.446134   25147 command_runner.go:130] > conmon_cgroup = "pod"
	I1127 23:55:42.446140   25147 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1127 23:55:42.446148   25147 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1127 23:55:42.446154   25147 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1127 23:55:42.446160   25147 command_runner.go:130] > conmon_env = [
	I1127 23:55:42.446167   25147 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1127 23:55:42.446172   25147 command_runner.go:130] > ]
	I1127 23:55:42.446179   25147 command_runner.go:130] > # Additional environment variables to set for all the
	I1127 23:55:42.446187   25147 command_runner.go:130] > # containers. These are overridden if set in the
	I1127 23:55:42.446195   25147 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1127 23:55:42.446200   25147 command_runner.go:130] > # default_env = [
	I1127 23:55:42.446205   25147 command_runner.go:130] > # ]
	I1127 23:55:42.446211   25147 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1127 23:55:42.446217   25147 command_runner.go:130] > # selinux = false
	I1127 23:55:42.446223   25147 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1127 23:55:42.446232   25147 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1127 23:55:42.446238   25147 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1127 23:55:42.446244   25147 command_runner.go:130] > # seccomp_profile = ""
	I1127 23:55:42.446250   25147 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1127 23:55:42.446258   25147 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1127 23:55:42.446264   25147 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1127 23:55:42.446271   25147 command_runner.go:130] > # which might increase security.
	I1127 23:55:42.446275   25147 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1127 23:55:42.446284   25147 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1127 23:55:42.446292   25147 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1127 23:55:42.446300   25147 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1127 23:55:42.446308   25147 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1127 23:55:42.446314   25147 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:55:42.446321   25147 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1127 23:55:42.446326   25147 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1127 23:55:42.446333   25147 command_runner.go:130] > # the cgroup blockio controller.
	I1127 23:55:42.446337   25147 command_runner.go:130] > # blockio_config_file = ""
	I1127 23:55:42.446345   25147 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1127 23:55:42.446351   25147 command_runner.go:130] > # irqbalance daemon.
	I1127 23:55:42.446356   25147 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1127 23:55:42.446366   25147 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1127 23:55:42.446373   25147 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:55:42.446380   25147 command_runner.go:130] > # rdt_config_file = ""
	I1127 23:55:42.446385   25147 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1127 23:55:42.446396   25147 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1127 23:55:42.446404   25147 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1127 23:55:42.446410   25147 command_runner.go:130] > # separate_pull_cgroup = ""
	I1127 23:55:42.446417   25147 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1127 23:55:42.446424   25147 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1127 23:55:42.446428   25147 command_runner.go:130] > # will be added.
	I1127 23:55:42.446435   25147 command_runner.go:130] > # default_capabilities = [
	I1127 23:55:42.446439   25147 command_runner.go:130] > # 	"CHOWN",
	I1127 23:55:42.446445   25147 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1127 23:55:42.446449   25147 command_runner.go:130] > # 	"FSETID",
	I1127 23:55:42.446455   25147 command_runner.go:130] > # 	"FOWNER",
	I1127 23:55:42.446459   25147 command_runner.go:130] > # 	"SETGID",
	I1127 23:55:42.446464   25147 command_runner.go:130] > # 	"SETUID",
	I1127 23:55:42.446468   25147 command_runner.go:130] > # 	"SETPCAP",
	I1127 23:55:42.446475   25147 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1127 23:55:42.446478   25147 command_runner.go:130] > # 	"KILL",
	I1127 23:55:42.446484   25147 command_runner.go:130] > # ]
	I1127 23:55:42.446491   25147 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1127 23:55:42.446498   25147 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1127 23:55:42.446505   25147 command_runner.go:130] > # default_sysctls = [
	I1127 23:55:42.446508   25147 command_runner.go:130] > # ]
	I1127 23:55:42.446513   25147 command_runner.go:130] > # List of devices on the host that a
	I1127 23:55:42.446522   25147 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1127 23:55:42.446529   25147 command_runner.go:130] > # allowed_devices = [
	I1127 23:55:42.446533   25147 command_runner.go:130] > # 	"/dev/fuse",
	I1127 23:55:42.446538   25147 command_runner.go:130] > # ]
	I1127 23:55:42.446544   25147 command_runner.go:130] > # List of additional devices. specified as
	I1127 23:55:42.446552   25147 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1127 23:55:42.446560   25147 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1127 23:55:42.446575   25147 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1127 23:55:42.446582   25147 command_runner.go:130] > # additional_devices = [
	I1127 23:55:42.446585   25147 command_runner.go:130] > # ]
	I1127 23:55:42.446591   25147 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1127 23:55:42.446597   25147 command_runner.go:130] > # cdi_spec_dirs = [
	I1127 23:55:42.446601   25147 command_runner.go:130] > # 	"/etc/cdi",
	I1127 23:55:42.446605   25147 command_runner.go:130] > # 	"/var/run/cdi",
	I1127 23:55:42.446611   25147 command_runner.go:130] > # ]
	I1127 23:55:42.446619   25147 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1127 23:55:42.446627   25147 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1127 23:55:42.446633   25147 command_runner.go:130] > # Defaults to false.
	I1127 23:55:42.446638   25147 command_runner.go:130] > # device_ownership_from_security_context = false
	I1127 23:55:42.446646   25147 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1127 23:55:42.446654   25147 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1127 23:55:42.446660   25147 command_runner.go:130] > # hooks_dir = [
	I1127 23:55:42.446665   25147 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1127 23:55:42.446670   25147 command_runner.go:130] > # ]
	I1127 23:55:42.446677   25147 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1127 23:55:42.446685   25147 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1127 23:55:42.446693   25147 command_runner.go:130] > # its default mounts from the following two files:
	I1127 23:55:42.446696   25147 command_runner.go:130] > #
	I1127 23:55:42.446705   25147 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1127 23:55:42.446712   25147 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1127 23:55:42.446720   25147 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1127 23:55:42.446725   25147 command_runner.go:130] > #
	I1127 23:55:42.446731   25147 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1127 23:55:42.446740   25147 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1127 23:55:42.446749   25147 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1127 23:55:42.446756   25147 command_runner.go:130] > #      only add mounts it finds in this file.
	I1127 23:55:42.446759   25147 command_runner.go:130] > #
	I1127 23:55:42.446766   25147 command_runner.go:130] > # default_mounts_file = ""
	I1127 23:55:42.446771   25147 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1127 23:55:42.446779   25147 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1127 23:55:42.446786   25147 command_runner.go:130] > pids_limit = 1024
	I1127 23:55:42.446792   25147 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1127 23:55:42.446802   25147 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1127 23:55:42.446808   25147 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1127 23:55:42.446817   25147 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1127 23:55:42.446823   25147 command_runner.go:130] > # log_size_max = -1
	I1127 23:55:42.446835   25147 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1127 23:55:42.446842   25147 command_runner.go:130] > # log_to_journald = false
	I1127 23:55:42.446851   25147 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1127 23:55:42.446862   25147 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1127 23:55:42.446874   25147 command_runner.go:130] > # Path to directory for container attach sockets.
	I1127 23:55:42.446886   25147 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1127 23:55:42.446899   25147 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1127 23:55:42.446909   25147 command_runner.go:130] > # bind_mount_prefix = ""
	I1127 23:55:42.446917   25147 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1127 23:55:42.446927   25147 command_runner.go:130] > # read_only = false
	I1127 23:55:42.446940   25147 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1127 23:55:42.446953   25147 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1127 23:55:42.446963   25147 command_runner.go:130] > # live configuration reload.
	I1127 23:55:42.446971   25147 command_runner.go:130] > # log_level = "info"
	I1127 23:55:42.446982   25147 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1127 23:55:42.446991   25147 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:55:42.446996   25147 command_runner.go:130] > # log_filter = ""
	I1127 23:55:42.447006   25147 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1127 23:55:42.447014   25147 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1127 23:55:42.447019   25147 command_runner.go:130] > # separated by comma.
	I1127 23:55:42.447023   25147 command_runner.go:130] > # uid_mappings = ""
	I1127 23:55:42.447029   25147 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1127 23:55:42.447037   25147 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1127 23:55:42.447042   25147 command_runner.go:130] > # separated by comma.
	I1127 23:55:42.447046   25147 command_runner.go:130] > # gid_mappings = ""
	I1127 23:55:42.447055   25147 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1127 23:55:42.447063   25147 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1127 23:55:42.447071   25147 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1127 23:55:42.447077   25147 command_runner.go:130] > # minimum_mappable_uid = -1
	I1127 23:55:42.447083   25147 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1127 23:55:42.447091   25147 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1127 23:55:42.447099   25147 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1127 23:55:42.447106   25147 command_runner.go:130] > # minimum_mappable_gid = -1
	I1127 23:55:42.447112   25147 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1127 23:55:42.447120   25147 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1127 23:55:42.447127   25147 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1127 23:55:42.447134   25147 command_runner.go:130] > # ctr_stop_timeout = 30
	I1127 23:55:42.447143   25147 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1127 23:55:42.447149   25147 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1127 23:55:42.447156   25147 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1127 23:55:42.447161   25147 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1127 23:55:42.447170   25147 command_runner.go:130] > drop_infra_ctr = false
	I1127 23:55:42.447178   25147 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1127 23:55:42.447186   25147 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1127 23:55:42.447195   25147 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1127 23:55:42.447201   25147 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1127 23:55:42.447207   25147 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1127 23:55:42.447214   25147 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1127 23:55:42.447219   25147 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1127 23:55:42.447227   25147 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1127 23:55:42.447234   25147 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1127 23:55:42.447240   25147 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1127 23:55:42.447248   25147 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1127 23:55:42.447255   25147 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1127 23:55:42.447261   25147 command_runner.go:130] > # default_runtime = "runc"
	I1127 23:55:42.447273   25147 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1127 23:55:42.447284   25147 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1127 23:55:42.447295   25147 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1127 23:55:42.447302   25147 command_runner.go:130] > # creation as a file is not desired either.
	I1127 23:55:42.447312   25147 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1127 23:55:42.447319   25147 command_runner.go:130] > # the hostname is being managed dynamically.
	I1127 23:55:42.447326   25147 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1127 23:55:42.447331   25147 command_runner.go:130] > # ]
	I1127 23:55:42.447339   25147 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1127 23:55:42.447347   25147 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1127 23:55:42.447356   25147 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1127 23:55:42.447364   25147 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1127 23:55:42.447370   25147 command_runner.go:130] > #
	I1127 23:55:42.447375   25147 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1127 23:55:42.447382   25147 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1127 23:55:42.447388   25147 command_runner.go:130] > #  runtime_type = "oci"
	I1127 23:55:42.447399   25147 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1127 23:55:42.447404   25147 command_runner.go:130] > #  privileged_without_host_devices = false
	I1127 23:55:42.447411   25147 command_runner.go:130] > #  allowed_annotations = []
	I1127 23:55:42.447415   25147 command_runner.go:130] > # Where:
	I1127 23:55:42.447421   25147 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1127 23:55:42.447427   25147 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1127 23:55:42.447436   25147 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1127 23:55:42.447444   25147 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1127 23:55:42.447450   25147 command_runner.go:130] > #   in $PATH.
	I1127 23:55:42.447456   25147 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1127 23:55:42.447464   25147 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1127 23:55:42.447472   25147 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1127 23:55:42.447478   25147 command_runner.go:130] > #   state.
	I1127 23:55:42.447485   25147 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1127 23:55:42.447492   25147 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1127 23:55:42.447499   25147 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1127 23:55:42.447506   25147 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1127 23:55:42.447512   25147 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1127 23:55:42.447520   25147 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1127 23:55:42.447529   25147 command_runner.go:130] > #   The currently recognized values are:
	I1127 23:55:42.447536   25147 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1127 23:55:42.447545   25147 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1127 23:55:42.447553   25147 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1127 23:55:42.447561   25147 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1127 23:55:42.447570   25147 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1127 23:55:42.447578   25147 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1127 23:55:42.447587   25147 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1127 23:55:42.447596   25147 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1127 23:55:42.447601   25147 command_runner.go:130] > #   should be moved to the container's cgroup
	I1127 23:55:42.447607   25147 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1127 23:55:42.447612   25147 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1127 23:55:42.447618   25147 command_runner.go:130] > runtime_type = "oci"
	I1127 23:55:42.447623   25147 command_runner.go:130] > runtime_root = "/run/runc"
	I1127 23:55:42.447629   25147 command_runner.go:130] > runtime_config_path = ""
	I1127 23:55:42.447633   25147 command_runner.go:130] > monitor_path = ""
	I1127 23:55:42.447639   25147 command_runner.go:130] > monitor_cgroup = ""
	I1127 23:55:42.447643   25147 command_runner.go:130] > monitor_exec_cgroup = ""
	I1127 23:55:42.447652   25147 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1127 23:55:42.447658   25147 command_runner.go:130] > # running containers
	I1127 23:55:42.447663   25147 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1127 23:55:42.447671   25147 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1127 23:55:42.447695   25147 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1127 23:55:42.447703   25147 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1127 23:55:42.447708   25147 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1127 23:55:42.447715   25147 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1127 23:55:42.447719   25147 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1127 23:55:42.447726   25147 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1127 23:55:42.447731   25147 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1127 23:55:42.447737   25147 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1127 23:55:42.447743   25147 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1127 23:55:42.447751   25147 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1127 23:55:42.447759   25147 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1127 23:55:42.447768   25147 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1127 23:55:42.447778   25147 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1127 23:55:42.447786   25147 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1127 23:55:42.447796   25147 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1127 23:55:42.447806   25147 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1127 23:55:42.447813   25147 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1127 23:55:42.447821   25147 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1127 23:55:42.447826   25147 command_runner.go:130] > # Example:
	I1127 23:55:42.447831   25147 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1127 23:55:42.447838   25147 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1127 23:55:42.447845   25147 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1127 23:55:42.447856   25147 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1127 23:55:42.447865   25147 command_runner.go:130] > # cpuset = 0
	I1127 23:55:42.447872   25147 command_runner.go:130] > # cpushares = "0-1"
	I1127 23:55:42.447881   25147 command_runner.go:130] > # Where:
	I1127 23:55:42.447889   25147 command_runner.go:130] > # The workload name is workload-type.
	I1127 23:55:42.447902   25147 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1127 23:55:42.447913   25147 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1127 23:55:42.447923   25147 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1127 23:55:42.447938   25147 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1127 23:55:42.447949   25147 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1127 23:55:42.447956   25147 command_runner.go:130] > # 
	I1127 23:55:42.447962   25147 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1127 23:55:42.447968   25147 command_runner.go:130] > #
	I1127 23:55:42.447973   25147 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1127 23:55:42.447981   25147 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1127 23:55:42.447990   25147 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1127 23:55:42.447998   25147 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1127 23:55:42.448006   25147 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1127 23:55:42.448012   25147 command_runner.go:130] > [crio.image]
	I1127 23:55:42.448018   25147 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1127 23:55:42.448025   25147 command_runner.go:130] > # default_transport = "docker://"
	I1127 23:55:42.448031   25147 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1127 23:55:42.448039   25147 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1127 23:55:42.448045   25147 command_runner.go:130] > # global_auth_file = ""
	I1127 23:55:42.448051   25147 command_runner.go:130] > # The image used to instantiate infra containers.
	I1127 23:55:42.448058   25147 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:55:42.448063   25147 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1127 23:55:42.448071   25147 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1127 23:55:42.448079   25147 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1127 23:55:42.448084   25147 command_runner.go:130] > # This option supports live configuration reload.
	I1127 23:55:42.448090   25147 command_runner.go:130] > # pause_image_auth_file = ""
	I1127 23:55:42.448096   25147 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1127 23:55:42.448104   25147 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1127 23:55:42.448112   25147 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1127 23:55:42.448120   25147 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1127 23:55:42.448126   25147 command_runner.go:130] > # pause_command = "/pause"
	I1127 23:55:42.448132   25147 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1127 23:55:42.448140   25147 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1127 23:55:42.448149   25147 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1127 23:55:42.448157   25147 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1127 23:55:42.448164   25147 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1127 23:55:42.448169   25147 command_runner.go:130] > # signature_policy = ""
	I1127 23:55:42.448176   25147 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1127 23:55:42.448185   25147 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1127 23:55:42.448191   25147 command_runner.go:130] > # changing them here.
	I1127 23:55:42.448195   25147 command_runner.go:130] > # insecure_registries = [
	I1127 23:55:42.448202   25147 command_runner.go:130] > # ]
	I1127 23:55:42.448211   25147 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1127 23:55:42.448218   25147 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1127 23:55:42.448225   25147 command_runner.go:130] > # image_volumes = "mkdir"
	I1127 23:55:42.448231   25147 command_runner.go:130] > # Temporary directory to use for storing big files
	I1127 23:55:42.448238   25147 command_runner.go:130] > # big_files_temporary_dir = ""
	I1127 23:55:42.448244   25147 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I1127 23:55:42.448250   25147 command_runner.go:130] > # CNI plugins.
	I1127 23:55:42.448254   25147 command_runner.go:130] > [crio.network]
	I1127 23:55:42.448259   25147 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1127 23:55:42.448267   25147 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1127 23:55:42.448273   25147 command_runner.go:130] > # cni_default_network = ""
	I1127 23:55:42.448279   25147 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1127 23:55:42.448286   25147 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1127 23:55:42.448291   25147 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1127 23:55:42.448297   25147 command_runner.go:130] > # plugin_dirs = [
	I1127 23:55:42.448302   25147 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1127 23:55:42.448307   25147 command_runner.go:130] > # ]
	I1127 23:55:42.448313   25147 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1127 23:55:42.448319   25147 command_runner.go:130] > [crio.metrics]
	I1127 23:55:42.448324   25147 command_runner.go:130] > # Globally enable or disable metrics support.
	I1127 23:55:42.448331   25147 command_runner.go:130] > enable_metrics = true
	I1127 23:55:42.448335   25147 command_runner.go:130] > # Specify enabled metrics collectors.
	I1127 23:55:42.448342   25147 command_runner.go:130] > # Per default all metrics are enabled.
	I1127 23:55:42.448348   25147 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1127 23:55:42.448356   25147 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1127 23:55:42.448364   25147 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1127 23:55:42.448371   25147 command_runner.go:130] > # metrics_collectors = [
	I1127 23:55:42.448378   25147 command_runner.go:130] > # 	"operations",
	I1127 23:55:42.448383   25147 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1127 23:55:42.448393   25147 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1127 23:55:42.448400   25147 command_runner.go:130] > # 	"operations_errors",
	I1127 23:55:42.448404   25147 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1127 23:55:42.448410   25147 command_runner.go:130] > # 	"image_pulls_by_name",
	I1127 23:55:42.448414   25147 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1127 23:55:42.448421   25147 command_runner.go:130] > # 	"image_pulls_failures",
	I1127 23:55:42.448425   25147 command_runner.go:130] > # 	"image_pulls_successes",
	I1127 23:55:42.448430   25147 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1127 23:55:42.448434   25147 command_runner.go:130] > # 	"image_layer_reuse",
	I1127 23:55:42.448441   25147 command_runner.go:130] > # 	"containers_oom_total",
	I1127 23:55:42.448445   25147 command_runner.go:130] > # 	"containers_oom",
	I1127 23:55:42.448450   25147 command_runner.go:130] > # 	"processes_defunct",
	I1127 23:55:42.448454   25147 command_runner.go:130] > # 	"operations_total",
	I1127 23:55:42.448460   25147 command_runner.go:130] > # 	"operations_latency_seconds",
	I1127 23:55:42.448465   25147 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1127 23:55:42.448472   25147 command_runner.go:130] > # 	"operations_errors_total",
	I1127 23:55:42.448476   25147 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1127 23:55:42.448483   25147 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1127 23:55:42.448488   25147 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1127 23:55:42.448495   25147 command_runner.go:130] > # 	"image_pulls_success_total",
	I1127 23:55:42.448499   25147 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1127 23:55:42.448506   25147 command_runner.go:130] > # 	"containers_oom_count_total",
	I1127 23:55:42.448509   25147 command_runner.go:130] > # ]
	I1127 23:55:42.448516   25147 command_runner.go:130] > # The port on which the metrics server will listen.
	I1127 23:55:42.448521   25147 command_runner.go:130] > # metrics_port = 9090
	I1127 23:55:42.448528   25147 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1127 23:55:42.448532   25147 command_runner.go:130] > # metrics_socket = ""
	I1127 23:55:42.448539   25147 command_runner.go:130] > # The certificate for the secure metrics server.
	I1127 23:55:42.448547   25147 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1127 23:55:42.448556   25147 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1127 23:55:42.448563   25147 command_runner.go:130] > # certificate on any modification event.
	I1127 23:55:42.448567   25147 command_runner.go:130] > # metrics_cert = ""
	I1127 23:55:42.448573   25147 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1127 23:55:42.448579   25147 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1127 23:55:42.448583   25147 command_runner.go:130] > # metrics_key = ""
	I1127 23:55:42.448590   25147 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1127 23:55:42.448595   25147 command_runner.go:130] > [crio.tracing]
	I1127 23:55:42.448601   25147 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1127 23:55:42.448607   25147 command_runner.go:130] > # enable_tracing = false
	I1127 23:55:42.448612   25147 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1127 23:55:42.448619   25147 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1127 23:55:42.448624   25147 command_runner.go:130] > # Number of samples to collect per million spans.
	I1127 23:55:42.448631   25147 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1127 23:55:42.448637   25147 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1127 23:55:42.448643   25147 command_runner.go:130] > [crio.stats]
	I1127 23:55:42.448649   25147 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1127 23:55:42.448656   25147 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1127 23:55:42.448660   25147 command_runner.go:130] > # stats_collection_period = 0
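	The dump above is the effective /etc/crio/crio.conf on the new machine, with pause_image and enable_metrics set and everything else left at its commented default. A minimal Go sketch for reading back a few of those keys, assuming the github.com/BurntSushi/toml package and the default config path; it is illustrative only and not part of the test harness:

// Read a few of the crio.conf keys shown in the log above.
// Assumes github.com/BurntSushi/toml; the file path is illustrative.
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type crioConf struct {
	Crio struct {
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
		Metrics struct {
			EnableMetrics bool `toml:"enable_metrics"`
			MetricsPort   int  `toml:"metrics_port"`
		} `toml:"metrics"`
		Network struct {
			NetworkDir string `toml:"network_dir"`
		} `toml:"network"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConf
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pause_image:   ", cfg.Crio.Image.PauseImage)
	fmt.Println("enable_metrics:", cfg.Crio.Metrics.EnableMetrics)
	fmt.Println("network_dir:   ", cfg.Crio.Network.NetworkDir)
}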
	I1127 23:55:42.448723   25147 cni.go:84] Creating CNI manager for ""
	I1127 23:55:42.448734   25147 cni.go:136] 2 nodes found, recommending kindnet
	I1127 23:55:42.448743   25147 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1127 23:55:42.448776   25147 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.97 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-883509 NodeName:multinode-883509-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1127 23:55:42.448900   25147 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-883509-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
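	The rendered kubeadm config above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch, assuming gopkg.in/yaml.v3 and an illustrative local file name, that decodes each document and prints its apiVersion and kind as a quick well-formedness check:

// Decode the multi-document kubeadm config stream and print each
// document's apiVersion and kind. Assumes gopkg.in/yaml.v3 and an
// illustrative local file name; not part of the minikube code base.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("config.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatalf("document %d: %v", i, err)
		}
		fmt.Printf("doc %d: %s/%s\n", i, doc.APIVersion, doc.Kind)
	}
}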
	I1127 23:55:42.448984   25147 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-883509-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-883509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1127 23:55:42.449044   25147 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1127 23:55:42.457819   25147 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I1127 23:55:42.457861   25147 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I1127 23:55:42.457908   25147 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I1127 23:55:42.466273   25147 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I1127 23:55:42.466296   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I1127 23:55:42.466359   25147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I1127 23:55:42.466384   25147 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I1127 23:55:42.466412   25147 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I1127 23:55:42.470531   25147 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I1127 23:55:42.470835   25147 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I1127 23:55:42.470863   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I1127 23:55:43.507281   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I1127 23:55:43.507356   25147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I1127 23:55:43.512478   25147 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I1127 23:55:43.512523   25147 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I1127 23:55:43.512548   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I1127 23:55:43.817894   25147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:55:43.831395   25147 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I1127 23:55:43.831503   25147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I1127 23:55:43.836080   25147 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I1127 23:55:43.836119   25147 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I1127 23:55:43.836139   25147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
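	Each of the three binaries above is fetched from a checksum-pinned URL (the ?checksum=file:… query pointing at the matching .sha256 file) before being copied to /var/lib/minikube/binaries. A minimal Go sketch of that verification, using the kubectl URL from the log and doing the SHA-256 comparison by hand; it stands in for, and is not, minikube's own download code:

// Download a release binary and verify it against its published
// .sha256 file, mirroring the checksum-pinned URLs in the log above.
// The URL is taken from the log; everything else is illustrative.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl"

	bin, err := fetch(base)
	if err != nil {
		log.Fatal(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		log.Fatal(err)
	}

	want := strings.Fields(string(sumFile))[0] // the file holds the hex digest
	sum := sha256.Sum256(bin)
	got := hex.EncodeToString(sum[:])

	if got != want {
		log.Fatalf("checksum mismatch: got %s, want %s", got, want)
	}
	fmt.Println("kubectl checksum OK:", got)
}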
	I1127 23:55:44.368015   25147 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1127 23:55:44.377126   25147 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1127 23:55:44.392417   25147 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1127 23:55:44.407504   25147 ssh_runner.go:195] Run: grep 192.168.39.159	control-plane.minikube.internal$ /etc/hosts
	I1127 23:55:44.411115   25147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:55:44.422579   25147 host.go:66] Checking if "multinode-883509" exists ...
	I1127 23:55:44.422882   25147 config.go:182] Loaded profile config "multinode-883509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:55:44.422963   25147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:55:44.423007   25147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:55:44.437222   25147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38597
	I1127 23:55:44.437586   25147 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:55:44.437996   25147 main.go:141] libmachine: Using API Version  1
	I1127 23:55:44.438017   25147 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:55:44.438294   25147 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:55:44.438481   25147 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1127 23:55:44.438627   25147 start.go:304] JoinCluster: &{Name:multinode-883509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-883509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:55:44.438711   25147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1127 23:55:44.438727   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1127 23:55:44.441305   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:55:44.441786   25147 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:55:44.441817   25147 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:55:44.441928   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1127 23:55:44.442110   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:55:44.442244   25147 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1127 23:55:44.442403   25147 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa Username:docker}
	I1127 23:55:44.615264   25147 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token vue8za.en6hn7z80h47trc5 --discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1127 23:55:44.619516   25147 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.97 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1127 23:55:44.619568   25147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vue8za.en6hn7z80h47trc5 --discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-883509-m02"
	I1127 23:55:44.665740   25147 command_runner.go:130] ! W1127 23:55:44.654681     822 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1127 23:55:44.809065   25147 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1127 23:55:47.006889   25147 command_runner.go:130] > [preflight] Running pre-flight checks
	I1127 23:55:47.006918   25147 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1127 23:55:47.006933   25147 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1127 23:55:47.006949   25147 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 23:55:47.006966   25147 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 23:55:47.006976   25147 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1127 23:55:47.006987   25147 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1127 23:55:47.007001   25147 command_runner.go:130] > This node has joined the cluster:
	I1127 23:55:47.007014   25147 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1127 23:55:47.007026   25147 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1127 23:55:47.007041   25147 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1127 23:55:47.007062   25147 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vue8za.en6hn7z80h47trc5 --discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-883509-m02": (2.387478769s)
	I1127 23:55:47.007088   25147 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1127 23:55:47.307011   25147 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1127 23:55:47.307055   25147 start.go:306] JoinCluster complete in 2.868431023s
	I1127 23:55:47.307068   25147 cni.go:84] Creating CNI manager for ""
	I1127 23:55:47.307075   25147 cni.go:136] 2 nodes found, recommending kindnet
	I1127 23:55:47.307129   25147 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1127 23:55:47.314285   25147 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1127 23:55:47.314312   25147 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1127 23:55:47.314326   25147 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1127 23:55:47.314335   25147 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1127 23:55:47.314349   25147 command_runner.go:130] > Access: 2023-11-27 23:54:22.192018215 +0000
	I1127 23:55:47.314357   25147 command_runner.go:130] > Modify: 2023-11-27 22:54:55.000000000 +0000
	I1127 23:55:47.314369   25147 command_runner.go:130] > Change: 2023-11-27 23:54:20.360018215 +0000
	I1127 23:55:47.314375   25147 command_runner.go:130] >  Birth: -
	I1127 23:55:47.314574   25147 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1127 23:55:47.314595   25147 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1127 23:55:47.333374   25147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1127 23:55:47.642411   25147 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1127 23:55:47.650713   25147 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1127 23:55:47.654438   25147 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1127 23:55:47.673440   25147 command_runner.go:130] > daemonset.apps/kindnet configured
	I1127 23:55:47.676372   25147 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1127 23:55:47.676582   25147 kapi.go:59] client config for multinode-883509: &rest.Config{Host:"https://192.168.39.159:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:55:47.676915   25147 round_trippers.go:463] GET https://192.168.39.159:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1127 23:55:47.676930   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:47.676937   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:47.676943   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:47.679186   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:47.679223   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:47.679233   25147 round_trippers.go:580]     Content-Length: 291
	I1127 23:55:47.679241   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:47 GMT
	I1127 23:55:47.679250   25147 round_trippers.go:580]     Audit-Id: 30ac1aff-6e88-4c6b-8f6e-f882ce3d7f12
	I1127 23:55:47.679258   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:47.679263   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:47.679269   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:47.679275   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:47.679295   25147 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"6e7cc9d5-ec42-4b16-9afb-9c3b43521ec6","resourceVersion":"449","creationTimestamp":"2023-11-27T23:54:52Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1127 23:55:47.679377   25147 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-883509" context rescaled to 1 replicas
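	The rescale above goes through the deployment's scale subresource (the GET on …/deployments/coredns/scale followed by a write). A minimal client-go sketch of the same operation, using the kubeconfig path shown in the log; treat it as an illustrative equivalent rather than minikube's kapi helper:

// Scale the kube-system/coredns deployment to one replica through the
// scale subresource, as the log above does. The kubeconfig path is the
// one shown in the log; the snippet as a whole is an illustrative sketch.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17206-4749/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	deployments := client.AppsV1().Deployments("kube-system")

	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	scale.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("coredns rescaled to 1 replica")
}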
	I1127 23:55:47.679404   25147 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.97 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1127 23:55:47.681289   25147 out.go:177] * Verifying Kubernetes components...
	I1127 23:55:47.682772   25147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:55:47.696681   25147 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1127 23:55:47.696967   25147 kapi.go:59] client config for multinode-883509: &rest.Config{Host:"https://192.168.39.159:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:55:47.697207   25147 node_ready.go:35] waiting up to 6m0s for node "multinode-883509-m02" to be "Ready" ...
	I1127 23:55:47.697268   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:47.697276   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:47.697283   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:47.697289   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:47.699872   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:47.699891   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:47.699901   25147 round_trippers.go:580]     Audit-Id: 16ddc2c4-ed1c-4b1e-8784-b1e9cf90d988
	I1127 23:55:47.699909   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:47.699917   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:47.699928   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:47.699937   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:47.699945   25147 round_trippers.go:580]     Content-Length: 3530
	I1127 23:55:47.699976   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:47 GMT
	I1127 23:55:47.700295   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"503","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1127 23:55:47.700614   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:47.700638   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:47.700645   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:47.700651   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:47.703951   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:47.703973   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:47.703983   25147 round_trippers.go:580]     Audit-Id: 00bd90e5-6a3f-4a2e-aa2f-c31b07eb285a
	I1127 23:55:47.703993   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:47.703999   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:47.704010   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:47.704020   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:47.704032   25147 round_trippers.go:580]     Content-Length: 3530
	I1127 23:55:47.704043   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:47 GMT
	I1127 23:55:47.704115   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"503","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1127 23:55:48.204690   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:48.204712   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:48.204720   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:48.204726   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:48.208691   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:48.208713   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:48.208721   25147 round_trippers.go:580]     Audit-Id: 82271a2c-0a51-43c7-ae5f-82f5300e8dc9
	I1127 23:55:48.208726   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:48.208731   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:48.208737   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:48.208742   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:48.208747   25147 round_trippers.go:580]     Content-Length: 3530
	I1127 23:55:48.208777   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:48 GMT
	I1127 23:55:48.208867   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"503","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1127 23:55:48.705417   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:48.705439   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:48.705448   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:48.705454   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:48.708268   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:48.708290   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:48.708299   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:48 GMT
	I1127 23:55:48.708308   25147 round_trippers.go:580]     Audit-Id: 320b1911-98f2-47ee-9b5d-4fd8194f21fc
	I1127 23:55:48.708313   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:48.708321   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:48.708326   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:48.708333   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:48.708339   25147 round_trippers.go:580]     Content-Length: 3530
	I1127 23:55:48.708382   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"503","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1127 23:55:49.204855   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:49.204931   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:49.204945   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:49.204953   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:49.208701   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:49.208728   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:49.208739   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:49 GMT
	I1127 23:55:49.208749   25147 round_trippers.go:580]     Audit-Id: 872f30dc-2345-4ef7-ac6b-46390ab50e82
	I1127 23:55:49.208773   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:49.208781   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:49.208790   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:49.208807   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:49.208817   25147 round_trippers.go:580]     Content-Length: 3530
	I1127 23:55:49.209152   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"503","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1127 23:55:49.705029   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:49.705053   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:49.705061   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:49.705067   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:49.708504   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:49.708538   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:49.708549   25147 round_trippers.go:580]     Audit-Id: 3f404f74-098a-4e8c-8ad6-65fceb7b7e29
	I1127 23:55:49.708559   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:49.708567   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:49.708575   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:49.708587   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:49.708596   25147 round_trippers.go:580]     Content-Length: 3530
	I1127 23:55:49.708607   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:49 GMT
	I1127 23:55:49.708695   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"503","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1127 23:55:49.708990   25147 node_ready.go:58] node "multinode-883509-m02" has status "Ready":"False"
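	The repeated GETs around this point are the 6m0s wait for multinode-883509-m02 to report Ready, polled roughly every half second. A minimal client-go sketch of an equivalent wait loop on the node's NodeReady condition, again using the kubeconfig path from the log; the interval and timeout mirror the log, the rest is illustrative:

// Poll the API server until node multinode-883509-m02 reports the
// NodeReady condition as True, roughly what the node_ready wait in the
// log does. Illustrative sketch only.
package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17206-4749/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "multinode-883509-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			log.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for node to become Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}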
	I1127 23:55:50.205243   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:50.205267   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:50.205275   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:50.205281   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:50.207962   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:50.207988   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:50.207998   25147 round_trippers.go:580]     Content-Length: 3530
	I1127 23:55:50.208007   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:50 GMT
	I1127 23:55:50.208015   25147 round_trippers.go:580]     Audit-Id: 13d2dffd-a5f4-42c6-8690-59bce887590c
	I1127 23:55:50.208023   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:50.208030   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:50.208038   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:50.208050   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:50.208162   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"503","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1127 23:55:50.704880   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:50.704903   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:50.704911   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:50.704917   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:50.707610   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:50.707634   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:50.707643   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:50.707654   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:50.707663   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:50.707672   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:50.707681   25147 round_trippers.go:580]     Content-Length: 3639
	I1127 23:55:50.707689   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:50 GMT
	I1127 23:55:50.707700   25147 round_trippers.go:580]     Audit-Id: ca889d0c-b32c-417c-bbe6-ff5f5354d4a3
	I1127 23:55:50.707799   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"513","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1127 23:55:51.205358   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:51.205381   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:51.205390   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:51.205395   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:51.208968   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:51.208993   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:51.209003   25147 round_trippers.go:580]     Audit-Id: 44648051-7f39-40b9-8228-37b91b1ff9d1
	I1127 23:55:51.209011   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:51.209020   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:51.209037   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:51.209045   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:51.209053   25147 round_trippers.go:580]     Content-Length: 3639
	I1127 23:55:51.209061   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:51 GMT
	I1127 23:55:51.209146   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"513","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1127 23:55:51.704672   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:51.704696   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:51.704704   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:51.704710   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:51.707453   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:51.707473   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:51.707481   25147 round_trippers.go:580]     Audit-Id: d47223f3-3b39-4acf-aea8-96c337a5e7b4
	I1127 23:55:51.707490   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:51.707495   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:51.707501   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:51.707506   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:51.707511   25147 round_trippers.go:580]     Content-Length: 3639
	I1127 23:55:51.707516   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:51 GMT
	I1127 23:55:51.707550   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"513","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1127 23:55:52.205180   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:52.205219   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:52.205232   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:52.205241   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:52.210720   25147 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1127 23:55:52.210740   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:52.210746   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:52.210752   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:52.210757   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:52.210763   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:52.210768   25147 round_trippers.go:580]     Content-Length: 3639
	I1127 23:55:52.210773   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:52 GMT
	I1127 23:55:52.210778   25147 round_trippers.go:580]     Audit-Id: 5c75e7ab-e883-485a-ae37-909106d4e85f
	I1127 23:55:52.210848   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"513","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1127 23:55:52.211150   25147 node_ready.go:58] node "multinode-883509-m02" has status "Ready":"False"
	I1127 23:55:52.705419   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:52.705441   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:52.705449   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:52.705455   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:52.709165   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:52.709189   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:52.709199   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:52 GMT
	I1127 23:55:52.709208   25147 round_trippers.go:580]     Audit-Id: b869f124-5542-4d28-9917-ea7b1d600e1c
	I1127 23:55:52.709213   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:52.709218   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:52.709223   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:52.709229   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:52.709234   25147 round_trippers.go:580]     Content-Length: 3639
	I1127 23:55:52.709556   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"513","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1127 23:55:53.205256   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:53.205280   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:53.205289   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:53.205298   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:53.207849   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:53.207866   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:53.207873   25147 round_trippers.go:580]     Audit-Id: 7f984df4-0b14-482a-bdad-e1c447903819
	I1127 23:55:53.207879   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:53.207884   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:53.207892   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:53.207900   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:53.207907   25147 round_trippers.go:580]     Content-Length: 3639
	I1127 23:55:53.207917   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:53 GMT
	I1127 23:55:53.208046   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"513","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1127 23:55:53.704596   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:53.704619   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:53.704628   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:53.704634   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:53.707992   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:53.708017   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:53.708025   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:53.708032   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:53.708037   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:53.708042   25147 round_trippers.go:580]     Content-Length: 3639
	I1127 23:55:53.708052   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:53 GMT
	I1127 23:55:53.708058   25147 round_trippers.go:580]     Audit-Id: 0ff856d3-62e6-4fff-a891-cd6dcfe80c4f
	I1127 23:55:53.708063   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:53.708136   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"513","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1127 23:55:54.205200   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:54.205223   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:54.205233   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:54.205240   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:54.208919   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:54.208947   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:54.208954   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:54.208960   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:54.208969   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:54.208977   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:54.208989   25147 round_trippers.go:580]     Content-Length: 3639
	I1127 23:55:54.208998   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:54 GMT
	I1127 23:55:54.209007   25147 round_trippers.go:580]     Audit-Id: 0cc938a8-92fc-4cae-9587-07bbd12c29ff
	I1127 23:55:54.209127   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"513","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1127 23:55:54.705287   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:54.705313   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:54.705325   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:54.705337   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:54.708482   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:54.708498   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:54.708506   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:54.708515   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:54.708523   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:54.708531   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:54.708540   25147 round_trippers.go:580]     Content-Length: 3639
	I1127 23:55:54.708554   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:54 GMT
	I1127 23:55:54.708563   25147 round_trippers.go:580]     Audit-Id: 3559e6aa-dfa4-4260-a85f-5ffad1c13144
	I1127 23:55:54.708602   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"513","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1127 23:55:54.708880   25147 node_ready.go:58] node "multinode-883509-m02" has status "Ready":"False"
	I1127 23:55:55.205283   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:55.205305   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:55.205313   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:55.205319   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:55.209015   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:55.209037   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:55.209047   25147 round_trippers.go:580]     Audit-Id: 1be2d32f-7842-47d9-b301-5460307e394f
	I1127 23:55:55.209053   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:55.209058   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:55.209063   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:55.209069   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:55.209077   25147 round_trippers.go:580]     Content-Length: 3639
	I1127 23:55:55.209082   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:55 GMT
	I1127 23:55:55.209146   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"513","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1127 23:55:55.705261   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:55.705286   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:55.705294   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:55.705300   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:55.708433   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:55.708454   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:55.708461   25147 round_trippers.go:580]     Content-Length: 3639
	I1127 23:55:55.708467   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:55 GMT
	I1127 23:55:55.708472   25147 round_trippers.go:580]     Audit-Id: ecfc1c56-fb9c-4ce1-8ba7-8d994b605088
	I1127 23:55:55.708478   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:55.708483   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:55.708488   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:55.708493   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:55.708570   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"513","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1127 23:55:56.205306   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:56.205328   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:56.205336   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:56.205342   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:56.208575   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:56.208601   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:56.208608   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:56.208614   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:56.208619   25147 round_trippers.go:580]     Content-Length: 3639
	I1127 23:55:56.208625   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:56 GMT
	I1127 23:55:56.208630   25147 round_trippers.go:580]     Audit-Id: beaa5c56-d493-4f43-8048-4dd78c04367d
	I1127 23:55:56.208636   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:56.208641   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:56.208703   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"513","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1127 23:55:56.705315   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:56.705338   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:56.705346   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:56.705352   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:56.708082   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:56.708106   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:56.708115   25147 round_trippers.go:580]     Content-Length: 3725
	I1127 23:55:56.708124   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:56 GMT
	I1127 23:55:56.708132   25147 round_trippers.go:580]     Audit-Id: 83910ccb-8a73-4060-8ccf-cb1e1c9eaa3f
	I1127 23:55:56.708140   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:56.708148   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:56.708156   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:56.708164   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:56.708268   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"532","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2701 chars]
	I1127 23:55:56.708579   25147 node_ready.go:49] node "multinode-883509-m02" has status "Ready":"True"
	I1127 23:55:56.708603   25147 node_ready.go:38] duration metric: took 9.011381164s waiting for node "multinode-883509-m02" to be "Ready" ...
	I1127 23:55:56.708614   25147 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 23:55:56.708682   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I1127 23:55:56.708691   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:56.708698   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:56.708704   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:56.712093   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:56.712107   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:56.712113   25147 round_trippers.go:580]     Audit-Id: 5c1b2142-af23-4831-8b9b-191121fbede8
	I1127 23:55:56.712118   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:56.712123   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:56.712128   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:56.712133   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:56.712139   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:56 GMT
	I1127 23:55:56.713493   25147 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"532"},"items":[{"metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"445","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67364 chars]
	I1127 23:55:56.715400   25147 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:56.715455   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1127 23:55:56.715463   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:56.715470   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:56.715477   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:56.717862   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:56.717881   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:56.717889   25147 round_trippers.go:580]     Audit-Id: 11379407-173c-445e-b028-d063b4acc138
	I1127 23:55:56.717894   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:56.717900   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:56.717907   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:56.717913   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:56.717919   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:56 GMT
	I1127 23:55:56.718318   25147 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"445","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1127 23:55:56.718682   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:56.718693   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:56.718699   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:56.718705   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:56.721023   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:56.721039   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:56.721045   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:56.721050   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:56.721055   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:56.721063   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:56.721072   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:56 GMT
	I1127 23:55:56.721081   25147 round_trippers.go:580]     Audit-Id: c1238ea2-22cf-43aa-8a2d-95d07be96ec6
	I1127 23:55:56.721488   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1127 23:55:56.721739   25147 pod_ready.go:92] pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:56.721752   25147 pod_ready.go:81] duration metric: took 6.335692ms waiting for pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:56.721759   25147 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:56.721793   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-883509
	I1127 23:55:56.721804   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:56.721811   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:56.721816   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:56.723929   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:56.723948   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:56.723955   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:56.723964   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:56.723971   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:56 GMT
	I1127 23:55:56.723979   25147 round_trippers.go:580]     Audit-Id: 5f1609d9-ca91-49cd-a663-fcbebe56700e
	I1127 23:55:56.723988   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:56.723999   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:56.724228   25147 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-883509","namespace":"kube-system","uid":"58bb8943-0a7c-4d4c-a090-ea8de587f504","resourceVersion":"451","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.159:2379","kubernetes.io/config.hash":"8d23c211c8738dad6e022e03cd2c9ea7","kubernetes.io/config.mirror":"8d23c211c8738dad6e022e03cd2c9ea7","kubernetes.io/config.seen":"2023-11-27T23:54:53.116542435Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1127 23:55:56.724552   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:56.724567   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:56.724577   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:56.724586   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:56.726646   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:56.726659   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:56.726665   25147 round_trippers.go:580]     Audit-Id: cb465d13-b0eb-4b63-b921-f4c257888108
	I1127 23:55:56.726670   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:56.726681   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:56.726692   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:56.726705   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:56.726713   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:56 GMT
	I1127 23:55:56.726959   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1127 23:55:56.727205   25147 pod_ready.go:92] pod "etcd-multinode-883509" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:56.727220   25147 pod_ready.go:81] duration metric: took 5.45576ms waiting for pod "etcd-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:56.727239   25147 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:56.727293   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-883509
	I1127 23:55:56.727302   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:56.727312   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:56.727326   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:56.729006   25147 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:55:56.729021   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:56.729027   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:56.729034   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:56.729042   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:56.729059   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:56.729066   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:56 GMT
	I1127 23:55:56.729071   25147 round_trippers.go:580]     Audit-Id: 0a2b21f8-8950-4cda-a84e-5ae337abe387
	I1127 23:55:56.729326   25147 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-883509","namespace":"kube-system","uid":"0a144c07-5db8-418a-ad15-110fabc7f377","resourceVersion":"452","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.159:8443","kubernetes.io/config.hash":"3b5e7b5fdb84862f46e6248e54c84795","kubernetes.io/config.mirror":"3b5e7b5fdb84862f46e6248e54c84795","kubernetes.io/config.seen":"2023-11-27T23:54:53.116543447Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1127 23:55:56.729656   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:56.729674   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:56.729681   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:56.729687   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:56.731247   25147 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:55:56.731258   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:56.731264   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:56.731269   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:56 GMT
	I1127 23:55:56.731275   25147 round_trippers.go:580]     Audit-Id: c1f0978f-7b3c-46d8-89f8-ae44f87170e8
	I1127 23:55:56.731286   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:56.731297   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:56.731306   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:56.732064   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1127 23:55:56.732331   25147 pod_ready.go:92] pod "kube-apiserver-multinode-883509" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:56.732345   25147 pod_ready.go:81] duration metric: took 5.093931ms waiting for pod "kube-apiserver-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:56.732352   25147 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:56.732398   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-883509
	I1127 23:55:56.732408   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:56.732419   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:56.732428   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:56.734518   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:56.734532   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:56.734538   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:56.734543   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:56.734553   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:56.734559   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:56 GMT
	I1127 23:55:56.734564   25147 round_trippers.go:580]     Audit-Id: 8fa8ef31-9931-4ee0-8a0f-c443e21b19e8
	I1127 23:55:56.734569   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:56.734755   25147 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-883509","namespace":"kube-system","uid":"f8474e48-c333-4772-ae1f-59cdb2bf95eb","resourceVersion":"450","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"de58e44a016d081ac103af6880ca64f0","kubernetes.io/config.mirror":"de58e44a016d081ac103af6880ca64f0","kubernetes.io/config.seen":"2023-11-27T23:54:53.116544230Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1127 23:55:56.735075   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:56.735087   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:56.735094   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:56.735099   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:56.736954   25147 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1127 23:55:56.736965   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:56.736971   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:56.736977   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:56.736982   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:56 GMT
	I1127 23:55:56.736987   25147 round_trippers.go:580]     Audit-Id: 0a1d9d7d-0d8c-4764-88a0-c8504c4f3e7d
	I1127 23:55:56.736993   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:56.737002   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:56.737143   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1127 23:55:56.737489   25147 pod_ready.go:92] pod "kube-controller-manager-multinode-883509" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:56.737508   25147 pod_ready.go:81] duration metric: took 5.149811ms waiting for pod "kube-controller-manager-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:56.737521   25147 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7g246" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:56.905892   25147 request.go:629] Waited for 168.317025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7g246
	I1127 23:55:56.905960   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7g246
	I1127 23:55:56.905966   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:56.905977   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:56.905994   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:56.909107   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:56.909126   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:56.909134   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:56.909139   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:56 GMT
	I1127 23:55:56.909144   25147 round_trippers.go:580]     Audit-Id: 01d59008-99b4-4413-ad3c-0a286455e7bd
	I1127 23:55:56.909150   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:56.909155   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:56.909165   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:56.909465   25147 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7g246","generateName":"kube-proxy-","namespace":"kube-system","uid":"c03a2053-f013-4269-a5e1-0acfebfc606c","resourceVersion":"417","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"dea68644-28a8-4da5-b7c7-c0035d2ae817","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dea68644-28a8-4da5-b7c7-c0035d2ae817\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1127 23:55:57.106300   25147 request.go:629] Waited for 196.343322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:57.106376   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:57.106383   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:57.106398   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:57.106414   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:57.111359   25147 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1127 23:55:57.111385   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:57.111447   25147 round_trippers.go:580]     Audit-Id: 70ca6e68-eec8-4b61-ad07-d4aa38bbfd31
	I1127 23:55:57.111466   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:57.111474   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:57.111487   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:57.111498   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:57.111507   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:57 GMT
	I1127 23:55:57.111806   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1127 23:55:57.112129   25147 pod_ready.go:92] pod "kube-proxy-7g246" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:57.112152   25147 pod_ready.go:81] duration metric: took 374.624068ms waiting for pod "kube-proxy-7g246" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:57.112166   25147 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fvsj6" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:57.305540   25147 request.go:629] Waited for 193.302115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvsj6
	I1127 23:55:57.305629   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvsj6
	I1127 23:55:57.305636   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:57.305649   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:57.305659   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:57.309134   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:57.309157   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:57.309167   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:57.309174   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:57.309181   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:57.309189   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:57 GMT
	I1127 23:55:57.309197   25147 round_trippers.go:580]     Audit-Id: c763737e-8d86-4ced-8af6-9f52ac66d0d3
	I1127 23:55:57.309208   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:57.309529   25147 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fvsj6","generateName":"kube-proxy-","namespace":"kube-system","uid":"d0e7a02e-868c-4774-885c-8b5ad728f451","resourceVersion":"519","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"dea68644-28a8-4da5-b7c7-c0035d2ae817","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dea68644-28a8-4da5-b7c7-c0035d2ae817\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1127 23:55:57.506368   25147 request.go:629] Waited for 196.404015ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:57.506436   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1127 23:55:57.506441   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:57.506449   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:57.506455   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:57.509615   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:57.509636   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:57.509643   25147 round_trippers.go:580]     Audit-Id: 423a33f1-a69f-47dd-b690-e6932b5e1dde
	I1127 23:55:57.509649   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:57.509654   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:57.509659   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:57.509664   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:57.509669   25147 round_trippers.go:580]     Content-Length: 3725
	I1127 23:55:57.509674   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:57 GMT
	I1127 23:55:57.509736   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"532","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2701 chars]
	I1127 23:55:57.509965   25147 pod_ready.go:92] pod "kube-proxy-fvsj6" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:57.509978   25147 pod_ready.go:81] duration metric: took 397.805348ms waiting for pod "kube-proxy-fvsj6" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:57.509987   25147 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:57.705334   25147 request.go:629] Waited for 195.295555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-883509
	I1127 23:55:57.705424   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-883509
	I1127 23:55:57.705435   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:57.705443   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:57.705452   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:57.708965   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:57.708979   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:57.708985   25147 round_trippers.go:580]     Audit-Id: fe66f597-ba38-4d88-9d01-7d01a457f78c
	I1127 23:55:57.708990   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:57.708995   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:57.709005   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:57.709022   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:57.709034   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:57 GMT
	I1127 23:55:57.709194   25147 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-883509","namespace":"kube-system","uid":"191f6a8c-7604-4f03-ba5a-d717b27f634b","resourceVersion":"453","creationTimestamp":"2023-11-27T23:54:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f3690327bcacf0b7b0b21542aa013461","kubernetes.io/config.mirror":"f3690327bcacf0b7b0b21542aa013461","kubernetes.io/config.seen":"2023-11-27T23:54:44.598174974Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1127 23:55:57.905889   25147 request.go:629] Waited for 196.322488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:57.905962   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1127 23:55:57.905966   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:57.905974   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:57.905981   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:57.908587   25147 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1127 23:55:57.908605   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:57.908611   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:57.908616   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:57.908622   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:57.908627   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:57.908632   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:57 GMT
	I1127 23:55:57.908637   25147 round_trippers.go:580]     Audit-Id: 7d81af87-8386-4d77-95bc-fe2f9132dc2a
	I1127 23:55:57.909209   25147 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1127 23:55:57.909520   25147 pod_ready.go:92] pod "kube-scheduler-multinode-883509" in "kube-system" namespace has status "Ready":"True"
	I1127 23:55:57.909538   25147 pod_ready.go:81] duration metric: took 399.543758ms waiting for pod "kube-scheduler-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1127 23:55:57.909548   25147 pod_ready.go:38] duration metric: took 1.200918797s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 23:55:57.909562   25147 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 23:55:57.909614   25147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:55:57.924924   25147 system_svc.go:56] duration metric: took 15.357425ms WaitForService to wait for kubelet.
	I1127 23:55:57.924945   25147 kubeadm.go:581] duration metric: took 10.245514816s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1127 23:55:57.924961   25147 node_conditions.go:102] verifying NodePressure condition ...
	I1127 23:55:58.106414   25147 request.go:629] Waited for 181.384971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes
	I1127 23:55:58.106495   25147 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes
	I1127 23:55:58.106506   25147 round_trippers.go:469] Request Headers:
	I1127 23:55:58.106518   25147 round_trippers.go:473]     Accept: application/json, */*
	I1127 23:55:58.106537   25147 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1127 23:55:58.109603   25147 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1127 23:55:58.109623   25147 round_trippers.go:577] Response Headers:
	I1127 23:55:58.109630   25147 round_trippers.go:580]     Audit-Id: 2f7c089e-7831-4ba5-bdf5-16b33ebe2d9f
	I1127 23:55:58.109636   25147 round_trippers.go:580]     Cache-Control: no-cache, private
	I1127 23:55:58.109641   25147 round_trippers.go:580]     Content-Type: application/json
	I1127 23:55:58.109646   25147 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1127 23:55:58.109651   25147 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1127 23:55:58.109656   25147 round_trippers.go:580]     Date: Mon, 27 Nov 2023 23:55:58 GMT
	I1127 23:55:58.110170   25147 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"534"},"items":[{"metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"428","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 9645 chars]
	I1127 23:55:58.110729   25147 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1127 23:55:58.110752   25147 node_conditions.go:123] node cpu capacity is 2
	I1127 23:55:58.110765   25147 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1127 23:55:58.110777   25147 node_conditions.go:123] node cpu capacity is 2
	I1127 23:55:58.110783   25147 node_conditions.go:105] duration metric: took 185.817167ms to run NodePressure ...
	I1127 23:55:58.110799   25147 start.go:228] waiting for startup goroutines ...
	I1127 23:55:58.110831   25147 start.go:242] writing updated cluster config ...
	I1127 23:55:58.111189   25147 ssh_runner.go:195] Run: rm -f paused
	I1127 23:55:58.157381   25147 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1127 23:55:58.160946   25147 out.go:177] * Done! kubectl is now configured to use "multinode-883509" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-11-27 23:54:21 UTC, ends at Mon 2023-11-27 23:56:05 UTC. --
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.700973190Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701129365700955302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e9452c4a-a0d3-48ca-a563-6b937cc05dfd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.701693126Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=642321dc-108e-4ec7-b377-39070e5f02dc name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.701766341Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=642321dc-108e-4ec7-b377-39070e5f02dc name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.701949809Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c432083958bfe3a2e11e4e2271b08eb1c3501032ac21d2e37112aa4c7937ceba,PodSandboxId:3e9b655a665215099afb059e869b2e94e248309d781a0c2d6e0f70b0348e7e7d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701129362281888802,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-9qz8x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d66953d-2cb8-45f7-a90b-c03b40f3fa0e,},Annotations:map[string]string{io.kubernetes.container.hash: e95f39a2,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13031d7d2f5b42dfb362dfb85a38a089e0e318db4eeb5d2d7b0864e963b3e7af,PodSandboxId:81e7b71fcc05ad72e25796dc3ad2efa715d93ed56fa80f891b814ba235163401,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701129311670382331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vws5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ac3c18-9997-49aa-a154-ade69c138f12,},Annotations:map[string]string{io.kubernetes.container.hash: d384be83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476a87b562be857b03e7be901ff5cafcfc2fff96c93cc0fd67cd8f492902ad16,PodSandboxId:08926e07cdf41dff1a15214cd77eb200c726646b86d986515aa8e6354af7ec82,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701129311354402899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: e59cdfcb-f7c6-4be9-a2e1-0931d582343c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c7c57c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e05479333fcc6f18c7773bab67e8d1f98d609f615e4e9076b312c167f6b8401,PodSandboxId:869f221db3a06bbb20276b5bbf7c735ff38bc5db8e87c2f41d7b6af5dadc1f77,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701129309103507487,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ztt77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: acbfe061-9a56-4999-baed-ef8d73dc9222,},Annotations:map[string]string{io.kubernetes.container.hash: 78700d1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b845ee4f9dd6f6c4705ce511789599cbe3a5108b38ea6c09f24cffd4783b1668,PodSandboxId:39338faaa65cb2f21bb36e7996e7374028ca513c333895bf81af7ad6b4a0d79f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701129306918714719,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03a2053-f013-4269-a5e1-0acfeb
fc606c,},Annotations:map[string]string{io.kubernetes.container.hash: da634c38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0b7a82a14651e496e7a257734c869851c9f3d14c95d6269ea9f525cf1c0efcb,PodSandboxId:a53fab4f651b92ee45469b7f6670f85a86945cb791bee71f15b84501cf3a972c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701129286125603933,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3690327bcacf0b7b0b21542aa013461,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9772295e4070c7a20d8cdb4d59a0dfb2b0bfb29f5968bf5fa4f60eb07ed89ad6,PodSandboxId:9b48276c5007b47116da6280dd96c887289918620b55780989e59dc9a53d097d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701129285896310742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d23c211c8738dad6e022e03cd2c9ea7,},Annotations:map[string]string{io.kubernetes.container.h
ash: a686aad8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e20c635ccf67e663f0c39e76c571c9333f1e9b985a5dba6a137bc8e3af2bfd8d,PodSandboxId:390d3a24336c5d795c478f4361bd1b800e5fc9a6fe17fd6ca9037b69ca7c87a1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701129285781690026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5e7b5fdb84862f46e6248e54c84795,},Annotations:map[string]string{io.kubernetes.container.hash: 5292279
e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:077913239f11cb63f27af5d0219376fbf4fff7d644b401d16b35b5ba11047b51,PodSandboxId:491e6e5bbe923c03b3c45ef5850cc0d53e74984505d916e745a298746c318842,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701129285612281530,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de58e44a016d081ac103af6880ca64f0,},Annotations:map[string]string{io.kubernetes
.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=642321dc-108e-4ec7-b377-39070e5f02dc name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.741001741Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=47398cde-1105-4ae6-89f0-6edea8bd4995 name=/runtime.v1.RuntimeService/Version
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.741144466Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=47398cde-1105-4ae6-89f0-6edea8bd4995 name=/runtime.v1.RuntimeService/Version
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.742015003Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2505c60c-f29c-4ebb-972b-b4da74625441 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.742512378Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701129365742500284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=2505c60c-f29c-4ebb-972b-b4da74625441 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.743693629Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8adbaaac-e8e9-4ce8-ac3e-45c33534ad30 name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.743767352Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8adbaaac-e8e9-4ce8-ac3e-45c33534ad30 name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.743949466Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c432083958bfe3a2e11e4e2271b08eb1c3501032ac21d2e37112aa4c7937ceba,PodSandboxId:3e9b655a665215099afb059e869b2e94e248309d781a0c2d6e0f70b0348e7e7d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701129362281888802,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-9qz8x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d66953d-2cb8-45f7-a90b-c03b40f3fa0e,},Annotations:map[string]string{io.kubernetes.container.hash: e95f39a2,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13031d7d2f5b42dfb362dfb85a38a089e0e318db4eeb5d2d7b0864e963b3e7af,PodSandboxId:81e7b71fcc05ad72e25796dc3ad2efa715d93ed56fa80f891b814ba235163401,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701129311670382331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vws5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ac3c18-9997-49aa-a154-ade69c138f12,},Annotations:map[string]string{io.kubernetes.container.hash: d384be83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476a87b562be857b03e7be901ff5cafcfc2fff96c93cc0fd67cd8f492902ad16,PodSandboxId:08926e07cdf41dff1a15214cd77eb200c726646b86d986515aa8e6354af7ec82,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701129311354402899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: e59cdfcb-f7c6-4be9-a2e1-0931d582343c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c7c57c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e05479333fcc6f18c7773bab67e8d1f98d609f615e4e9076b312c167f6b8401,PodSandboxId:869f221db3a06bbb20276b5bbf7c735ff38bc5db8e87c2f41d7b6af5dadc1f77,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701129309103507487,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ztt77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: acbfe061-9a56-4999-baed-ef8d73dc9222,},Annotations:map[string]string{io.kubernetes.container.hash: 78700d1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b845ee4f9dd6f6c4705ce511789599cbe3a5108b38ea6c09f24cffd4783b1668,PodSandboxId:39338faaa65cb2f21bb36e7996e7374028ca513c333895bf81af7ad6b4a0d79f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701129306918714719,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03a2053-f013-4269-a5e1-0acfeb
fc606c,},Annotations:map[string]string{io.kubernetes.container.hash: da634c38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0b7a82a14651e496e7a257734c869851c9f3d14c95d6269ea9f525cf1c0efcb,PodSandboxId:a53fab4f651b92ee45469b7f6670f85a86945cb791bee71f15b84501cf3a972c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701129286125603933,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3690327bcacf0b7b0b21542aa013461,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9772295e4070c7a20d8cdb4d59a0dfb2b0bfb29f5968bf5fa4f60eb07ed89ad6,PodSandboxId:9b48276c5007b47116da6280dd96c887289918620b55780989e59dc9a53d097d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701129285896310742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d23c211c8738dad6e022e03cd2c9ea7,},Annotations:map[string]string{io.kubernetes.container.h
ash: a686aad8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e20c635ccf67e663f0c39e76c571c9333f1e9b985a5dba6a137bc8e3af2bfd8d,PodSandboxId:390d3a24336c5d795c478f4361bd1b800e5fc9a6fe17fd6ca9037b69ca7c87a1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701129285781690026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5e7b5fdb84862f46e6248e54c84795,},Annotations:map[string]string{io.kubernetes.container.hash: 5292279
e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:077913239f11cb63f27af5d0219376fbf4fff7d644b401d16b35b5ba11047b51,PodSandboxId:491e6e5bbe923c03b3c45ef5850cc0d53e74984505d916e745a298746c318842,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701129285612281530,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de58e44a016d081ac103af6880ca64f0,},Annotations:map[string]string{io.kubernetes
.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8adbaaac-e8e9-4ce8-ac3e-45c33534ad30 name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.786122315Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=06e97880-90b1-431e-ab71-4de6af10cee2 name=/runtime.v1.RuntimeService/Version
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.786210822Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=06e97880-90b1-431e-ab71-4de6af10cee2 name=/runtime.v1.RuntimeService/Version
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.787372513Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=595cb6b6-c559-4507-954b-b3d090f8b349 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.787810645Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701129365787793960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=595cb6b6-c559-4507-954b-b3d090f8b349 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.788628327Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8be79dfc-cb00-46f0-a7f3-78849ad2deac name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.788731348Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8be79dfc-cb00-46f0-a7f3-78849ad2deac name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.789149792Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c432083958bfe3a2e11e4e2271b08eb1c3501032ac21d2e37112aa4c7937ceba,PodSandboxId:3e9b655a665215099afb059e869b2e94e248309d781a0c2d6e0f70b0348e7e7d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701129362281888802,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-9qz8x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d66953d-2cb8-45f7-a90b-c03b40f3fa0e,},Annotations:map[string]string{io.kubernetes.container.hash: e95f39a2,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13031d7d2f5b42dfb362dfb85a38a089e0e318db4eeb5d2d7b0864e963b3e7af,PodSandboxId:81e7b71fcc05ad72e25796dc3ad2efa715d93ed56fa80f891b814ba235163401,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701129311670382331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vws5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ac3c18-9997-49aa-a154-ade69c138f12,},Annotations:map[string]string{io.kubernetes.container.hash: d384be83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476a87b562be857b03e7be901ff5cafcfc2fff96c93cc0fd67cd8f492902ad16,PodSandboxId:08926e07cdf41dff1a15214cd77eb200c726646b86d986515aa8e6354af7ec82,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701129311354402899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: e59cdfcb-f7c6-4be9-a2e1-0931d582343c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c7c57c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e05479333fcc6f18c7773bab67e8d1f98d609f615e4e9076b312c167f6b8401,PodSandboxId:869f221db3a06bbb20276b5bbf7c735ff38bc5db8e87c2f41d7b6af5dadc1f77,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701129309103507487,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ztt77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: acbfe061-9a56-4999-baed-ef8d73dc9222,},Annotations:map[string]string{io.kubernetes.container.hash: 78700d1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b845ee4f9dd6f6c4705ce511789599cbe3a5108b38ea6c09f24cffd4783b1668,PodSandboxId:39338faaa65cb2f21bb36e7996e7374028ca513c333895bf81af7ad6b4a0d79f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701129306918714719,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03a2053-f013-4269-a5e1-0acfeb
fc606c,},Annotations:map[string]string{io.kubernetes.container.hash: da634c38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0b7a82a14651e496e7a257734c869851c9f3d14c95d6269ea9f525cf1c0efcb,PodSandboxId:a53fab4f651b92ee45469b7f6670f85a86945cb791bee71f15b84501cf3a972c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701129286125603933,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3690327bcacf0b7b0b21542aa013461,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9772295e4070c7a20d8cdb4d59a0dfb2b0bfb29f5968bf5fa4f60eb07ed89ad6,PodSandboxId:9b48276c5007b47116da6280dd96c887289918620b55780989e59dc9a53d097d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701129285896310742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d23c211c8738dad6e022e03cd2c9ea7,},Annotations:map[string]string{io.kubernetes.container.h
ash: a686aad8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e20c635ccf67e663f0c39e76c571c9333f1e9b985a5dba6a137bc8e3af2bfd8d,PodSandboxId:390d3a24336c5d795c478f4361bd1b800e5fc9a6fe17fd6ca9037b69ca7c87a1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701129285781690026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5e7b5fdb84862f46e6248e54c84795,},Annotations:map[string]string{io.kubernetes.container.hash: 5292279
e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:077913239f11cb63f27af5d0219376fbf4fff7d644b401d16b35b5ba11047b51,PodSandboxId:491e6e5bbe923c03b3c45ef5850cc0d53e74984505d916e745a298746c318842,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701129285612281530,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de58e44a016d081ac103af6880ca64f0,},Annotations:map[string]string{io.kubernetes
.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8be79dfc-cb00-46f0-a7f3-78849ad2deac name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.828719846Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=bce16ed3-b783-40b6-bc61-3ad3399f7e7a name=/runtime.v1.RuntimeService/Version
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.828806066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=bce16ed3-b783-40b6-bc61-3ad3399f7e7a name=/runtime.v1.RuntimeService/Version
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.830902943Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7866dbfd-513e-4e43-a8c0-2cd60a22fdfb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.831437246Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701129365831421818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=7866dbfd-513e-4e43-a8c0-2cd60a22fdfb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.831996987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=309a25ae-bbaf-4176-8d40-9800eb8f5916 name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.832104338Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=309a25ae-bbaf-4176-8d40-9800eb8f5916 name=/runtime.v1.RuntimeService/ListContainers
	Nov 27 23:56:05 multinode-883509 crio[712]: time="2023-11-27 23:56:05.832297070Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c432083958bfe3a2e11e4e2271b08eb1c3501032ac21d2e37112aa4c7937ceba,PodSandboxId:3e9b655a665215099afb059e869b2e94e248309d781a0c2d6e0f70b0348e7e7d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701129362281888802,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-9qz8x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d66953d-2cb8-45f7-a90b-c03b40f3fa0e,},Annotations:map[string]string{io.kubernetes.container.hash: e95f39a2,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13031d7d2f5b42dfb362dfb85a38a089e0e318db4eeb5d2d7b0864e963b3e7af,PodSandboxId:81e7b71fcc05ad72e25796dc3ad2efa715d93ed56fa80f891b814ba235163401,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701129311670382331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vws5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ac3c18-9997-49aa-a154-ade69c138f12,},Annotations:map[string]string{io.kubernetes.container.hash: d384be83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476a87b562be857b03e7be901ff5cafcfc2fff96c93cc0fd67cd8f492902ad16,PodSandboxId:08926e07cdf41dff1a15214cd77eb200c726646b86d986515aa8e6354af7ec82,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701129311354402899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: e59cdfcb-f7c6-4be9-a2e1-0931d582343c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c7c57c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e05479333fcc6f18c7773bab67e8d1f98d609f615e4e9076b312c167f6b8401,PodSandboxId:869f221db3a06bbb20276b5bbf7c735ff38bc5db8e87c2f41d7b6af5dadc1f77,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701129309103507487,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ztt77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: acbfe061-9a56-4999-baed-ef8d73dc9222,},Annotations:map[string]string{io.kubernetes.container.hash: 78700d1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b845ee4f9dd6f6c4705ce511789599cbe3a5108b38ea6c09f24cffd4783b1668,PodSandboxId:39338faaa65cb2f21bb36e7996e7374028ca513c333895bf81af7ad6b4a0d79f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701129306918714719,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03a2053-f013-4269-a5e1-0acfeb
fc606c,},Annotations:map[string]string{io.kubernetes.container.hash: da634c38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0b7a82a14651e496e7a257734c869851c9f3d14c95d6269ea9f525cf1c0efcb,PodSandboxId:a53fab4f651b92ee45469b7f6670f85a86945cb791bee71f15b84501cf3a972c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701129286125603933,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3690327bcacf0b7b0b21542aa013461,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9772295e4070c7a20d8cdb4d59a0dfb2b0bfb29f5968bf5fa4f60eb07ed89ad6,PodSandboxId:9b48276c5007b47116da6280dd96c887289918620b55780989e59dc9a53d097d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701129285896310742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d23c211c8738dad6e022e03cd2c9ea7,},Annotations:map[string]string{io.kubernetes.container.h
ash: a686aad8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e20c635ccf67e663f0c39e76c571c9333f1e9b985a5dba6a137bc8e3af2bfd8d,PodSandboxId:390d3a24336c5d795c478f4361bd1b800e5fc9a6fe17fd6ca9037b69ca7c87a1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701129285781690026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5e7b5fdb84862f46e6248e54c84795,},Annotations:map[string]string{io.kubernetes.container.hash: 5292279
e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:077913239f11cb63f27af5d0219376fbf4fff7d644b401d16b35b5ba11047b51,PodSandboxId:491e6e5bbe923c03b3c45ef5850cc0d53e74984505d916e745a298746c318842,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701129285612281530,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de58e44a016d081ac103af6880ca64f0,},Annotations:map[string]string{io.kubernetes
.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=309a25ae-bbaf-4176-8d40-9800eb8f5916 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c432083958bfe       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 seconds ago        Running             busybox                   0                   3e9b655a66521       busybox-5bc68d56bd-9qz8x
	13031d7d2f5b4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      54 seconds ago       Running             coredns                   0                   81e7b71fcc05a       coredns-5dd5756b68-9vws5
	476a87b562be8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      54 seconds ago       Running             storage-provisioner       0                   08926e07cdf41       storage-provisioner
	2e05479333fcc       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      56 seconds ago       Running             kindnet-cni               0                   869f221db3a06       kindnet-ztt77
	b845ee4f9dd6f       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      58 seconds ago       Running             kube-proxy                0                   39338faaa65cb       kube-proxy-7g246
	a0b7a82a14651       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   a53fab4f651b9       kube-scheduler-multinode-883509
	9772295e4070c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   9b48276c5007b       etcd-multinode-883509
	e20c635ccf67e       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   390d3a24336c5       kube-apiserver-multinode-883509
	077913239f11c       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   491e6e5bbe923       kube-controller-manager-multinode-883509
	
	* 
	* ==> coredns [13031d7d2f5b42dfb362dfb85a38a089e0e318db4eeb5d2d7b0864e963b3e7af] <==
	* [INFO] 10.244.0.3:38353 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000164338s
	[INFO] 10.244.1.2:33221 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137554s
	[INFO] 10.244.1.2:34299 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001835563s
	[INFO] 10.244.1.2:43415 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088867s
	[INFO] 10.244.1.2:47539 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076901s
	[INFO] 10.244.1.2:44410 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001183562s
	[INFO] 10.244.1.2:48364 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074928s
	[INFO] 10.244.1.2:48441 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127081s
	[INFO] 10.244.1.2:51620 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008582s
	[INFO] 10.244.0.3:33538 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203461s
	[INFO] 10.244.0.3:48859 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083189s
	[INFO] 10.244.0.3:57614 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000174417s
	[INFO] 10.244.0.3:41836 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067262s
	[INFO] 10.244.1.2:47430 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123694s
	[INFO] 10.244.1.2:49155 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161052s
	[INFO] 10.244.1.2:48964 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000243718s
	[INFO] 10.244.1.2:55291 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000132514s
	[INFO] 10.244.0.3:36785 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163279s
	[INFO] 10.244.0.3:48960 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000136764s
	[INFO] 10.244.0.3:44554 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000088925s
	[INFO] 10.244.0.3:57091 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000096737s
	[INFO] 10.244.1.2:43090 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134412s
	[INFO] 10.244.1.2:39109 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00015565s
	[INFO] 10.244.1.2:35922 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000087364s
	[INFO] 10.244.1.2:60331 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00016672s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-883509
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-883509
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=multinode-883509
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_27T23_54_54_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 23:54:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-883509
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Nov 2023 23:56:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Nov 2023 23:55:10 +0000   Mon, 27 Nov 2023 23:54:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Nov 2023 23:55:10 +0000   Mon, 27 Nov 2023 23:54:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Nov 2023 23:55:10 +0000   Mon, 27 Nov 2023 23:54:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Nov 2023 23:55:10 +0000   Mon, 27 Nov 2023 23:55:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.159
	  Hostname:    multinode-883509
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 6c49f7519e69445bb7f042f71f13e7f0
	  System UUID:                6c49f751-9e69-445b-b7f0-42f71f13e7f0
	  Boot ID:                    679b7e65-aef4-4ab6-8844-35d24916ea2e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-9qz8x                    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         8s
	  kube-system                 coredns-5dd5756b68-9vws5                    100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     61s
	  kube-system                 etcd-multinode-883509                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         73s
	  kube-system                 kindnet-ztt77                               100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (2%!)(MISSING)        50Mi (2%!)(MISSING)      61s
	  kube-system                 kube-apiserver-multinode-883509             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         73s
	  kube-system                 kube-controller-manager-multinode-883509    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         73s
	  kube-system                 kube-proxy-7g246                            0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         61s
	  kube-system                 kube-scheduler-multinode-883509             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         75s
	  kube-system                 storage-provisioner                         0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%!)(MISSING)   100m (5%!)(MISSING)
	  memory             220Mi (10%!)(MISSING)  220Mi (10%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 58s                kube-proxy       
	  Normal  Starting                 82s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  82s (x8 over 82s)  kubelet          Node multinode-883509 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s (x8 over 82s)  kubelet          Node multinode-883509 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x7 over 82s)  kubelet          Node multinode-883509 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 73s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  73s                kubelet          Node multinode-883509 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    73s                kubelet          Node multinode-883509 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     73s                kubelet          Node multinode-883509 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           61s                node-controller  Node multinode-883509 event: Registered Node multinode-883509 in Controller
	  Normal  NodeReady                56s                kubelet          Node multinode-883509 status is now: NodeReady
	
	
	Name:               multinode-883509-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-883509-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 23:55:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-883509-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Nov 2023 23:55:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Nov 2023 23:55:56 +0000   Mon, 27 Nov 2023 23:55:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Nov 2023 23:55:56 +0000   Mon, 27 Nov 2023 23:55:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Nov 2023 23:55:56 +0000   Mon, 27 Nov 2023 23:55:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Nov 2023 23:55:56 +0000   Mon, 27 Nov 2023 23:55:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    multinode-883509-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a88c66405d8044d3aa460988e7c5446e
	  System UUID:                a88c6640-5d80-44d3-aa46-0988e7c5446e
	  Boot ID:                    4813c5b7-040a-4b4c-8928-bbc37f7efde1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-lgwvm    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         8s
	  kube-system                 kindnet-t4wlq               100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (2%!)(MISSING)        50Mi (2%!)(MISSING)      20s
	  kube-system                 kube-proxy-fvsj6            0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%!)(MISSING)  100m (5%!)(MISSING)
	  memory             50Mi (2%!)(MISSING)  50Mi (2%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)     0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)     0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15s                kube-proxy       
	  Normal  NodeHasSufficientMemory  20s (x5 over 21s)  kubelet          Node multinode-883509-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x5 over 21s)  kubelet          Node multinode-883509-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x5 over 21s)  kubelet          Node multinode-883509-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16s                node-controller  Node multinode-883509-m02 event: Registered Node multinode-883509-m02 in Controller
	  Normal  NodeReady                10s                kubelet          Node multinode-883509-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Nov27 23:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067977] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.343402] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.333647] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.144933] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.057081] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.381582] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.108693] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.149815] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.107008] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.208426] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[  +9.198161] systemd-fstab-generator[920]: Ignoring "noauto" for root device
	[  +8.784477] systemd-fstab-generator[1251]: Ignoring "noauto" for root device
	[Nov27 23:55] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [9772295e4070c7a20d8cdb4d59a0dfb2b0bfb29f5968bf5fa4f60eb07ed89ad6] <==
	* {"level":"info","ts":"2023-11-27T23:54:47.604134Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f0ef8018a32f46af","initial-advertise-peer-urls":["https://192.168.39.159:2380"],"listen-peer-urls":["https://192.168.39.159:2380"],"advertise-client-urls":["https://192.168.39.159:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.159:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-27T23:54:47.604235Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-27T23:54:47.604355Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.159:2380"}
	{"level":"info","ts":"2023-11-27T23:54:47.607175Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.159:2380"}
	{"level":"info","ts":"2023-11-27T23:54:47.607565Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af switched to configuration voters=(17361235931841906351)"}
	{"level":"info","ts":"2023-11-27T23:54:47.607882Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bc02953927cca850","local-member-id":"f0ef8018a32f46af","added-peer-id":"f0ef8018a32f46af","added-peer-peer-urls":["https://192.168.39.159:2380"]}
	{"level":"info","ts":"2023-11-27T23:54:48.147999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-27T23:54:48.148161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-27T23:54:48.148179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af received MsgPreVoteResp from f0ef8018a32f46af at term 1"}
	{"level":"info","ts":"2023-11-27T23:54:48.148212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af became candidate at term 2"}
	{"level":"info","ts":"2023-11-27T23:54:48.148218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af received MsgVoteResp from f0ef8018a32f46af at term 2"}
	{"level":"info","ts":"2023-11-27T23:54:48.148226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af became leader at term 2"}
	{"level":"info","ts":"2023-11-27T23:54:48.148234Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f0ef8018a32f46af elected leader f0ef8018a32f46af at term 2"}
	{"level":"info","ts":"2023-11-27T23:54:48.1496Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f0ef8018a32f46af","local-member-attributes":"{Name:multinode-883509 ClientURLs:[https://192.168.39.159:2379]}","request-path":"/0/members/f0ef8018a32f46af/attributes","cluster-id":"bc02953927cca850","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-27T23:54:48.149737Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-27T23:54:48.14989Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-27T23:54:48.150911Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-27T23:54:48.150915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.159:2379"}
	{"level":"info","ts":"2023-11-27T23:54:48.150995Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T23:54:48.15123Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-27T23:54:48.151263Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-27T23:54:48.152426Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bc02953927cca850","local-member-id":"f0ef8018a32f46af","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T23:54:48.152513Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T23:54:48.152557Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T23:55:45.920909Z","caller":"traceutil/trace.go:171","msg":"trace[918760848] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"146.109697ms","start":"2023-11-27T23:55:45.77476Z","end":"2023-11-27T23:55:45.92087Z","steps":["trace[918760848] 'process raft request'  (duration: 145.946714ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  23:56:06 up 1 min,  0 users,  load average: 0.54, 0.25, 0.09
	Linux multinode-883509 5.10.57 #1 SMP Mon Nov 27 21:58:27 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [2e05479333fcc6f18c7773bab67e8d1f98d609f615e4e9076b312c167f6b8401] <==
	* I1127 23:55:09.855865       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1127 23:55:09.855969       1 main.go:107] hostIP = 192.168.39.159
	podIP = 192.168.39.159
	I1127 23:55:09.856298       1 main.go:116] setting mtu 1500 for CNI 
	I1127 23:55:09.856309       1 main.go:146] kindnetd IP family: "ipv4"
	I1127 23:55:09.856329       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1127 23:55:10.353096       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I1127 23:55:10.353180       1 main.go:227] handling current node
	I1127 23:55:20.364209       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I1127 23:55:20.364259       1 main.go:227] handling current node
	I1127 23:55:30.373925       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I1127 23:55:30.373988       1 main.go:227] handling current node
	I1127 23:55:40.378456       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I1127 23:55:40.378518       1 main.go:227] handling current node
	I1127 23:55:50.391686       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I1127 23:55:50.391740       1 main.go:227] handling current node
	I1127 23:55:50.391754       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I1127 23:55:50.391760       1 main.go:250] Node multinode-883509-m02 has CIDR [10.244.1.0/24] 
	I1127 23:55:50.391978       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.97 Flags: [] Table: 0} 
	I1127 23:56:00.397235       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I1127 23:56:00.397318       1 main.go:227] handling current node
	I1127 23:56:00.397342       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I1127 23:56:00.397360       1 main.go:250] Node multinode-883509-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [e20c635ccf67e663f0c39e76c571c9333f1e9b985a5dba6a137bc8e3af2bfd8d] <==
	* I1127 23:54:49.650888       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1127 23:54:49.650901       1 shared_informer.go:318] Caches are synced for configmaps
	I1127 23:54:49.659709       1 controller.go:624] quota admission added evaluator for: namespaces
	I1127 23:54:49.674319       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1127 23:54:49.689193       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1127 23:54:49.689263       1 aggregator.go:166] initial CRD sync complete...
	I1127 23:54:49.689290       1 autoregister_controller.go:141] Starting autoregister controller
	I1127 23:54:49.689314       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1127 23:54:49.689338       1 cache.go:39] Caches are synced for autoregister controller
	I1127 23:54:49.720662       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1127 23:54:50.543110       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1127 23:54:50.551091       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1127 23:54:50.551150       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1127 23:54:51.152308       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1127 23:54:51.199558       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1127 23:54:51.290793       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1127 23:54:51.301253       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.159]
	I1127 23:54:51.302882       1 controller.go:624] quota admission added evaluator for: endpoints
	I1127 23:54:51.310274       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1127 23:54:51.659864       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1127 23:54:52.951976       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1127 23:54:52.967076       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1127 23:54:52.984657       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1127 23:55:05.472173       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1127 23:55:05.544355       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [077913239f11cb63f27af5d0219376fbf4fff7d644b401d16b35b5ba11047b51] <==
	* I1127 23:55:06.017558       1 shared_informer.go:318] Caches are synced for garbage collector
	I1127 23:55:10.593841       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.936µs"
	I1127 23:55:10.629790       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.287µs"
	I1127 23:55:12.315327       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.164942ms"
	I1127 23:55:12.315613       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.235µs"
	I1127 23:55:15.450282       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1127 23:55:46.771530       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-883509-m02\" does not exist"
	I1127 23:55:46.796113       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-883509-m02" podCIDRs=["10.244.1.0/24"]
	I1127 23:55:46.802673       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-fvsj6"
	I1127 23:55:46.802742       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-t4wlq"
	I1127 23:55:50.457006       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-883509-m02"
	I1127 23:55:50.457181       1 event.go:307] "Event occurred" object="multinode-883509-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-883509-m02 event: Registered Node multinode-883509-m02 in Controller"
	I1127 23:55:56.481383       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-883509-m02"
	I1127 23:55:58.816874       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1127 23:55:58.830529       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-lgwvm"
	I1127 23:55:58.847780       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-9qz8x"
	I1127 23:55:58.880118       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="63.630678ms"
	I1127 23:55:58.895396       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="15.176007ms"
	I1127 23:55:58.895570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="73.24µs"
	I1127 23:55:58.896829       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="64.163µs"
	I1127 23:56:00.473370       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-lgwvm" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-lgwvm"
	I1127 23:56:02.453399       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="11.023544ms"
	I1127 23:56:02.453527       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="32.28µs"
	I1127 23:56:02.483886       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.590043ms"
	I1127 23:56:02.484666       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="41.056µs"
	
	* 
	* ==> kube-proxy [b845ee4f9dd6f6c4705ce511789599cbe3a5108b38ea6c09f24cffd4783b1668] <==
	* I1127 23:55:07.209772       1 server_others.go:69] "Using iptables proxy"
	I1127 23:55:07.249227       1 node.go:141] Successfully retrieved node IP: 192.168.39.159
	I1127 23:55:07.299113       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1127 23:55:07.299154       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1127 23:55:07.301692       1 server_others.go:152] "Using iptables Proxier"
	I1127 23:55:07.301756       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1127 23:55:07.302231       1 server.go:846] "Version info" version="v1.28.4"
	I1127 23:55:07.302281       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1127 23:55:07.303758       1 config.go:188] "Starting service config controller"
	I1127 23:55:07.303806       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1127 23:55:07.303829       1 config.go:97] "Starting endpoint slice config controller"
	I1127 23:55:07.303856       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1127 23:55:07.305561       1 config.go:315] "Starting node config controller"
	I1127 23:55:07.305601       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1127 23:55:07.404767       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1127 23:55:07.404858       1 shared_informer.go:318] Caches are synced for service config
	I1127 23:55:07.406377       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [a0b7a82a14651e496e7a257734c869851c9f3d14c95d6269ea9f525cf1c0efcb] <==
	* W1127 23:54:49.712152       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1127 23:54:49.712269       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1127 23:54:49.712512       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1127 23:54:49.712628       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1127 23:54:49.712868       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1127 23:54:49.712960       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1127 23:54:49.714407       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1127 23:54:49.714453       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1127 23:54:49.715635       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1127 23:54:49.715762       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1127 23:54:50.548418       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1127 23:54:50.548581       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1127 23:54:50.616554       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1127 23:54:50.616641       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1127 23:54:50.665797       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1127 23:54:50.665875       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1127 23:54:50.781247       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1127 23:54:50.781334       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1127 23:54:50.807105       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1127 23:54:50.807188       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1127 23:54:50.844357       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1127 23:54:50.844432       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1127 23:54:50.954462       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1127 23:54:50.954544       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1127 23:54:52.588133       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-11-27 23:54:21 UTC, ends at Mon 2023-11-27 23:56:06 UTC. --
	Nov 27 23:55:05 multinode-883509 kubelet[1258]: I1127 23:55:05.698194    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8j9c\" (UniqueName: \"kubernetes.io/projected/c03a2053-f013-4269-a5e1-0acfebfc606c-kube-api-access-v8j9c\") pod \"kube-proxy-7g246\" (UID: \"c03a2053-f013-4269-a5e1-0acfebfc606c\") " pod="kube-system/kube-proxy-7g246"
	Nov 27 23:55:05 multinode-883509 kubelet[1258]: I1127 23:55:05.698267    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acbfe061-9a56-4999-baed-ef8d73dc9222-xtables-lock\") pod \"kindnet-ztt77\" (UID: \"acbfe061-9a56-4999-baed-ef8d73dc9222\") " pod="kube-system/kindnet-ztt77"
	Nov 27 23:55:05 multinode-883509 kubelet[1258]: I1127 23:55:05.698293    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acbfe061-9a56-4999-baed-ef8d73dc9222-lib-modules\") pod \"kindnet-ztt77\" (UID: \"acbfe061-9a56-4999-baed-ef8d73dc9222\") " pod="kube-system/kindnet-ztt77"
	Nov 27 23:55:05 multinode-883509 kubelet[1258]: I1127 23:55:05.698319    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c03a2053-f013-4269-a5e1-0acfebfc606c-kube-proxy\") pod \"kube-proxy-7g246\" (UID: \"c03a2053-f013-4269-a5e1-0acfebfc606c\") " pod="kube-system/kube-proxy-7g246"
	Nov 27 23:55:05 multinode-883509 kubelet[1258]: I1127 23:55:05.698339    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c03a2053-f013-4269-a5e1-0acfebfc606c-xtables-lock\") pod \"kube-proxy-7g246\" (UID: \"c03a2053-f013-4269-a5e1-0acfebfc606c\") " pod="kube-system/kube-proxy-7g246"
	Nov 27 23:55:05 multinode-883509 kubelet[1258]: I1127 23:55:05.698370    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/acbfe061-9a56-4999-baed-ef8d73dc9222-cni-cfg\") pod \"kindnet-ztt77\" (UID: \"acbfe061-9a56-4999-baed-ef8d73dc9222\") " pod="kube-system/kindnet-ztt77"
	Nov 27 23:55:05 multinode-883509 kubelet[1258]: I1127 23:55:05.698398    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9vfg\" (UniqueName: \"kubernetes.io/projected/acbfe061-9a56-4999-baed-ef8d73dc9222-kube-api-access-w9vfg\") pod \"kindnet-ztt77\" (UID: \"acbfe061-9a56-4999-baed-ef8d73dc9222\") " pod="kube-system/kindnet-ztt77"
	Nov 27 23:55:05 multinode-883509 kubelet[1258]: I1127 23:55:05.698419    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c03a2053-f013-4269-a5e1-0acfebfc606c-lib-modules\") pod \"kube-proxy-7g246\" (UID: \"c03a2053-f013-4269-a5e1-0acfebfc606c\") " pod="kube-system/kube-proxy-7g246"
	Nov 27 23:55:10 multinode-883509 kubelet[1258]: I1127 23:55:10.252943    1258 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7g246" podStartSLOduration=5.252888798 podCreationTimestamp="2023-11-27 23:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-27 23:55:07.236925785 +0000 UTC m=+14.282012482" watchObservedRunningTime="2023-11-27 23:55:10.252888798 +0000 UTC m=+17.297975477"
	Nov 27 23:55:10 multinode-883509 kubelet[1258]: I1127 23:55:10.545451    1258 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 27 23:55:10 multinode-883509 kubelet[1258]: I1127 23:55:10.587680    1258 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-ztt77" podStartSLOduration=5.587643323 podCreationTimestamp="2023-11-27 23:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-27 23:55:10.254171743 +0000 UTC m=+17.299258439" watchObservedRunningTime="2023-11-27 23:55:10.587643323 +0000 UTC m=+17.632730020"
	Nov 27 23:55:10 multinode-883509 kubelet[1258]: I1127 23:55:10.587827    1258 topology_manager.go:215] "Topology Admit Handler" podUID="e59cdfcb-f7c6-4be9-a2e1-0931d582343c" podNamespace="kube-system" podName="storage-provisioner"
	Nov 27 23:55:10 multinode-883509 kubelet[1258]: I1127 23:55:10.591986    1258 topology_manager.go:215] "Topology Admit Handler" podUID="66ac3c18-9997-49aa-a154-ade69c138f12" podNamespace="kube-system" podName="coredns-5dd5756b68-9vws5"
	Nov 27 23:55:10 multinode-883509 kubelet[1258]: I1127 23:55:10.640695    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66ac3c18-9997-49aa-a154-ade69c138f12-config-volume\") pod \"coredns-5dd5756b68-9vws5\" (UID: \"66ac3c18-9997-49aa-a154-ade69c138f12\") " pod="kube-system/coredns-5dd5756b68-9vws5"
	Nov 27 23:55:10 multinode-883509 kubelet[1258]: I1127 23:55:10.640748    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e59cdfcb-f7c6-4be9-a2e1-0931d582343c-tmp\") pod \"storage-provisioner\" (UID: \"e59cdfcb-f7c6-4be9-a2e1-0931d582343c\") " pod="kube-system/storage-provisioner"
	Nov 27 23:55:10 multinode-883509 kubelet[1258]: I1127 23:55:10.640776    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjbft\" (UniqueName: \"kubernetes.io/projected/66ac3c18-9997-49aa-a154-ade69c138f12-kube-api-access-gjbft\") pod \"coredns-5dd5756b68-9vws5\" (UID: \"66ac3c18-9997-49aa-a154-ade69c138f12\") " pod="kube-system/coredns-5dd5756b68-9vws5"
	Nov 27 23:55:10 multinode-883509 kubelet[1258]: I1127 23:55:10.640797    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z2jc\" (UniqueName: \"kubernetes.io/projected/e59cdfcb-f7c6-4be9-a2e1-0931d582343c-kube-api-access-4z2jc\") pod \"storage-provisioner\" (UID: \"e59cdfcb-f7c6-4be9-a2e1-0931d582343c\") " pod="kube-system/storage-provisioner"
	Nov 27 23:55:12 multinode-883509 kubelet[1258]: I1127 23:55:12.296981    1258 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.296928253 podCreationTimestamp="2023-11-27 23:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-27 23:55:12.269465016 +0000 UTC m=+19.314551712" watchObservedRunningTime="2023-11-27 23:55:12.296928253 +0000 UTC m=+19.342014983"
	Nov 27 23:55:13 multinode-883509 kubelet[1258]: I1127 23:55:13.124986    1258 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-9vws5" podStartSLOduration=8.124952614 podCreationTimestamp="2023-11-27 23:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-27 23:55:12.298391552 +0000 UTC m=+19.343478250" watchObservedRunningTime="2023-11-27 23:55:13.124952614 +0000 UTC m=+20.170039310"
	Nov 27 23:55:53 multinode-883509 kubelet[1258]: E1127 23:55:53.120333    1258 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 27 23:55:53 multinode-883509 kubelet[1258]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 27 23:55:53 multinode-883509 kubelet[1258]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 27 23:55:53 multinode-883509 kubelet[1258]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 27 23:55:58 multinode-883509 kubelet[1258]: I1127 23:55:58.864826    1258 topology_manager.go:215] "Topology Admit Handler" podUID="1d66953d-2cb8-45f7-a90b-c03b40f3fa0e" podNamespace="default" podName="busybox-5bc68d56bd-9qz8x"
	Nov 27 23:55:58 multinode-883509 kubelet[1258]: I1127 23:55:58.914638    1258 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69pvr\" (UniqueName: \"kubernetes.io/projected/1d66953d-2cb8-45f7-a90b-c03b40f3fa0e-kube-api-access-69pvr\") pod \"busybox-5bc68d56bd-9qz8x\" (UID: \"1d66953d-2cb8-45f7-a90b-c03b40f3fa0e\") " pod="default/busybox-5bc68d56bd-9qz8x"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-883509 -n multinode-883509
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-883509 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.16s)

x
+
TestMultiNode/serial/RestartKeepsNodes (780.49s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-883509
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-883509
E1127 23:58:50.988141   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-883509: exit status 82 (2m1.330208661s)

-- stdout --
	* Stopping node "multinode-883509"  ...
	* Stopping node "multinode-883509"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:292: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-883509" : exit status 82
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-883509 --wait=true -v=8 --alsologtostderr
E1128 00:00:27.680812   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1128 00:01:55.433270   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1128 00:03:50.987858   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1128 00:05:14.033961   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1128 00:05:27.680860   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1128 00:06:55.433302   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1128 00:08:18.477079   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1128 00:08:50.988224   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1128 00:10:27.680327   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-883509 --wait=true -v=8 --alsologtostderr: (10m56.319638284s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-883509
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-883509 -n multinode-883509
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-883509 logs -n 25: (1.544732085s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-883509 ssh -n                                                                 | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | multinode-883509-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-883509 cp multinode-883509-m02:/home/docker/cp-test.txt                       | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2595831742/001/cp-test_multinode-883509-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-883509 ssh -n                                                                 | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | multinode-883509-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-883509 cp multinode-883509-m02:/home/docker/cp-test.txt                       | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | multinode-883509:/home/docker/cp-test_multinode-883509-m02_multinode-883509.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-883509 ssh -n                                                                 | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | multinode-883509-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-883509 ssh -n multinode-883509 sudo cat                                       | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | /home/docker/cp-test_multinode-883509-m02_multinode-883509.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-883509 cp multinode-883509-m02:/home/docker/cp-test.txt                       | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | multinode-883509-m03:/home/docker/cp-test_multinode-883509-m02_multinode-883509-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-883509 ssh -n                                                                 | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | multinode-883509-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-883509 ssh -n multinode-883509-m03 sudo cat                                   | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | /home/docker/cp-test_multinode-883509-m02_multinode-883509-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-883509 cp testdata/cp-test.txt                                                | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | multinode-883509-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-883509 ssh -n                                                                 | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | multinode-883509-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-883509 cp multinode-883509-m03:/home/docker/cp-test.txt                       | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2595831742/001/cp-test_multinode-883509-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-883509 ssh -n                                                                 | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | multinode-883509-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-883509 cp multinode-883509-m03:/home/docker/cp-test.txt                       | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | multinode-883509:/home/docker/cp-test_multinode-883509-m03_multinode-883509.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-883509 ssh -n                                                                 | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | multinode-883509-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-883509 ssh -n multinode-883509 sudo cat                                       | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:57 UTC |
	|         | /home/docker/cp-test_multinode-883509-m03_multinode-883509.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-883509 cp multinode-883509-m03:/home/docker/cp-test.txt                       | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:57 UTC | 27 Nov 23 23:57 UTC |
	|         | multinode-883509-m02:/home/docker/cp-test_multinode-883509-m03_multinode-883509-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-883509 ssh -n                                                                 | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:57 UTC | 27 Nov 23 23:57 UTC |
	|         | multinode-883509-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-883509 ssh -n multinode-883509-m02 sudo cat                                   | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:57 UTC | 27 Nov 23 23:57 UTC |
	|         | /home/docker/cp-test_multinode-883509-m03_multinode-883509-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-883509 node stop m03                                                          | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:57 UTC | 27 Nov 23 23:57 UTC |
	| node    | multinode-883509 node start                                                             | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:57 UTC | 27 Nov 23 23:57 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-883509                                                                | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:57 UTC |                     |
	| stop    | -p multinode-883509                                                                     | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:57 UTC |                     |
	| start   | -p multinode-883509                                                                     | multinode-883509 | jenkins | v1.32.0 | 27 Nov 23 23:59 UTC | 28 Nov 23 00:10 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-883509                                                                | multinode-883509 | jenkins | v1.32.0 | 28 Nov 23 00:10 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:59:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:59:35.208015   28506 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:59:35.208288   28506 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:59:35.208298   28506 out.go:309] Setting ErrFile to fd 2...
	I1127 23:59:35.208302   28506 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:59:35.208508   28506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1127 23:59:35.209155   28506 out.go:303] Setting JSON to false
	I1127 23:59:35.210033   28506 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2522,"bootTime":1701127053,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 23:59:35.210088   28506 start.go:138] virtualization: kvm guest
	I1127 23:59:35.212456   28506 out.go:177] * [multinode-883509] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 23:59:35.213753   28506 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:59:35.213754   28506 notify.go:220] Checking for updates...
	I1127 23:59:35.215136   28506 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:59:35.216469   28506 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1127 23:59:35.217707   28506 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1127 23:59:35.219019   28506 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 23:59:35.220251   28506 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:59:35.222080   28506 config.go:182] Loaded profile config "multinode-883509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:59:35.222193   28506 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:59:35.222870   28506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:59:35.222925   28506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:59:35.237057   28506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33863
	I1127 23:59:35.237566   28506 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:59:35.238107   28506 main.go:141] libmachine: Using API Version  1
	I1127 23:59:35.238124   28506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:59:35.238478   28506 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:59:35.238642   28506 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1127 23:59:35.272735   28506 out.go:177] * Using the kvm2 driver based on existing profile
	I1127 23:59:35.273785   28506 start.go:298] selected driver: kvm2
	I1127 23:59:35.273799   28506 start.go:902] validating driver "kvm2" against &{Name:multinode-883509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-883509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.128 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:59:35.273942   28506 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:59:35.274266   28506 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:59:35.274372   28506 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17206-4749/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1127 23:59:35.288243   28506 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1127 23:59:35.288952   28506 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1127 23:59:35.289037   28506 cni.go:84] Creating CNI manager for ""
	I1127 23:59:35.289055   28506 cni.go:136] 3 nodes found, recommending kindnet
	I1127 23:59:35.289066   28506 start_flags.go:323] config:
	{Name:multinode-883509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-883509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.128 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:59:35.289336   28506 iso.go:125] acquiring lock: {Name:mkcbf4fbddcb89ef7fa17df683cb708781ecb7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:59:35.291834   28506 out.go:177] * Starting control plane node multinode-883509 in cluster multinode-883509
	I1127 23:59:35.293232   28506 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:59:35.293271   28506 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1127 23:59:35.293280   28506 cache.go:56] Caching tarball of preloaded images
	I1127 23:59:35.293359   28506 preload.go:174] Found /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1127 23:59:35.293370   28506 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1127 23:59:35.293505   28506 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/config.json ...
	I1127 23:59:35.293709   28506 start.go:365] acquiring machines lock for multinode-883509: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1127 23:59:35.293762   28506 start.go:369] acquired machines lock for "multinode-883509" in 32.343µs
	I1127 23:59:35.293786   28506 start.go:96] Skipping create...Using existing machine configuration
	I1127 23:59:35.293797   28506 fix.go:54] fixHost starting: 
	I1127 23:59:35.294065   28506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:59:35.294104   28506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:59:35.308193   28506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41227
	I1127 23:59:35.308620   28506 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:59:35.309164   28506 main.go:141] libmachine: Using API Version  1
	I1127 23:59:35.309188   28506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:59:35.309536   28506 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:59:35.309711   28506 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1127 23:59:35.309898   28506 main.go:141] libmachine: (multinode-883509) Calling .GetState
	I1127 23:59:35.311591   28506 fix.go:102] recreateIfNeeded on multinode-883509: state=Running err=<nil>
	W1127 23:59:35.311607   28506 fix.go:128] unexpected machine state, will restart: <nil>
	I1127 23:59:35.314433   28506 out.go:177] * Updating the running kvm2 "multinode-883509" VM ...
	I1127 23:59:35.315838   28506 machine.go:88] provisioning docker machine ...
	I1127 23:59:35.315864   28506 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1127 23:59:35.316138   28506 main.go:141] libmachine: (multinode-883509) Calling .GetMachineName
	I1127 23:59:35.316286   28506 buildroot.go:166] provisioning hostname "multinode-883509"
	I1127 23:59:35.316300   28506 main.go:141] libmachine: (multinode-883509) Calling .GetMachineName
	I1127 23:59:35.316431   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1127 23:59:35.319046   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:59:35.319539   28506 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:59:35.319566   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:59:35.319707   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1127 23:59:35.319875   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:59:35.320000   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:59:35.320142   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1127 23:59:35.320285   28506 main.go:141] libmachine: Using SSH client type: native
	I1127 23:59:35.320669   28506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1127 23:59:35.320687   28506 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-883509 && echo "multinode-883509" | sudo tee /etc/hostname
	I1127 23:59:53.704978   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1127 23:59:59.785115   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:00:02.857091   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:00:08.937089   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:00:12.009044   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:00:18.089022   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:00:21.161072   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:00:27.241010   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:00:30.312962   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:00:36.393066   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:00:39.465036   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:00:45.545030   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:00:48.616985   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:00:54.697080   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:00:57.769161   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:01:03.849072   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:01:06.921006   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:01:13.001163   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:01:16.073027   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:01:22.153137   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:01:25.225025   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:01:31.305018   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:01:34.377101   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:01:40.457133   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:01:43.529056   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:01:49.609050   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:01:52.681084   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:01:58.761010   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:02:01.833020   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:02:07.913024   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:02:10.985061   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:02:17.065000   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:02:20.136997   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:02:26.217014   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:02:29.289031   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:02:35.368975   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:02:38.441008   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:02:44.520991   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:02:47.592999   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:02:53.673025   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:02:56.744958   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:03:02.825048   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:03:05.896994   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:03:11.977064   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:03:15.048998   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:03:21.129056   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:03:24.201013   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:03:30.280942   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:03:33.352989   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:03:39.433047   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:03:42.505111   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:03:48.585027   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:03:51.657114   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:03:57.737093   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:04:00.809025   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:04:06.889051   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:04:09.961028   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:04:16.041048   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:04:19.113062   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:04:25.193102   28506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.159:22: connect: no route to host
	I1128 00:04:28.194833   28506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:04:28.194877   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1128 00:04:28.196739   28506 machine.go:91] provisioned docker machine in 4m52.880884801s
	I1128 00:04:28.196797   28506 fix.go:56] fixHost completed within 4m52.903000827s
	I1128 00:04:28.196807   28506 start.go:83] releasing machines lock for "multinode-883509", held for 4m52.903033047s
	W1128 00:04:28.196828   28506 start.go:691] error starting host: provision: host is not running
	W1128 00:04:28.196953   28506 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1128 00:04:28.196965   28506 start.go:706] Will try again in 5 seconds ...
	I1128 00:04:33.199809   28506 start.go:365] acquiring machines lock for multinode-883509: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 00:04:33.199930   28506 start.go:369] acquired machines lock for "multinode-883509" in 82.309µs
	I1128 00:04:33.200001   28506 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:04:33.200011   28506 fix.go:54] fixHost starting: 
	I1128 00:04:33.200304   28506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 00:04:33.200331   28506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:04:33.214579   28506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39159
	I1128 00:04:33.214951   28506 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:04:33.215360   28506 main.go:141] libmachine: Using API Version  1
	I1128 00:04:33.215378   28506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:04:33.215765   28506 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:04:33.215959   28506 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1128 00:04:33.216111   28506 main.go:141] libmachine: (multinode-883509) Calling .GetState
	I1128 00:04:33.217692   28506 fix.go:102] recreateIfNeeded on multinode-883509: state=Stopped err=<nil>
	I1128 00:04:33.217723   28506 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	W1128 00:04:33.217889   28506 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:04:33.219778   28506 out.go:177] * Restarting existing kvm2 VM for "multinode-883509" ...
	I1128 00:04:33.221081   28506 main.go:141] libmachine: (multinode-883509) Calling .Start
	I1128 00:04:33.221216   28506 main.go:141] libmachine: (multinode-883509) Ensuring networks are active...
	I1128 00:04:33.222011   28506 main.go:141] libmachine: (multinode-883509) Ensuring network default is active
	I1128 00:04:33.222385   28506 main.go:141] libmachine: (multinode-883509) Ensuring network mk-multinode-883509 is active
	I1128 00:04:33.222680   28506 main.go:141] libmachine: (multinode-883509) Getting domain xml...
	I1128 00:04:33.223337   28506 main.go:141] libmachine: (multinode-883509) Creating domain...
	I1128 00:04:34.438121   28506 main.go:141] libmachine: (multinode-883509) Waiting to get IP...
	I1128 00:04:34.439192   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:34.439694   28506 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1128 00:04:34.439740   28506 main.go:141] libmachine: (multinode-883509) DBG | I1128 00:04:34.439662   29293 retry.go:31] will retry after 210.151388ms: waiting for machine to come up
	I1128 00:04:34.651138   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:34.651598   28506 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1128 00:04:34.651617   28506 main.go:141] libmachine: (multinode-883509) DBG | I1128 00:04:34.651567   29293 retry.go:31] will retry after 240.503279ms: waiting for machine to come up
	I1128 00:04:34.894076   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:34.894582   28506 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1128 00:04:34.894610   28506 main.go:141] libmachine: (multinode-883509) DBG | I1128 00:04:34.894533   29293 retry.go:31] will retry after 418.140771ms: waiting for machine to come up
	I1128 00:04:35.313814   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:35.314331   28506 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1128 00:04:35.314364   28506 main.go:141] libmachine: (multinode-883509) DBG | I1128 00:04:35.314267   29293 retry.go:31] will retry after 585.887802ms: waiting for machine to come up
	I1128 00:04:35.901914   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:35.902332   28506 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1128 00:04:35.902360   28506 main.go:141] libmachine: (multinode-883509) DBG | I1128 00:04:35.902295   29293 retry.go:31] will retry after 607.01458ms: waiting for machine to come up
	I1128 00:04:36.511012   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:36.511459   28506 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1128 00:04:36.511482   28506 main.go:141] libmachine: (multinode-883509) DBG | I1128 00:04:36.511444   29293 retry.go:31] will retry after 681.798983ms: waiting for machine to come up
	I1128 00:04:37.195190   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:37.195616   28506 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1128 00:04:37.195649   28506 main.go:141] libmachine: (multinode-883509) DBG | I1128 00:04:37.195584   29293 retry.go:31] will retry after 985.718633ms: waiting for machine to come up
	I1128 00:04:38.182572   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:38.183075   28506 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1128 00:04:38.183096   28506 main.go:141] libmachine: (multinode-883509) DBG | I1128 00:04:38.183055   29293 retry.go:31] will retry after 1.080254388s: waiting for machine to come up
	I1128 00:04:39.264706   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:39.265312   28506 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1128 00:04:39.265330   28506 main.go:141] libmachine: (multinode-883509) DBG | I1128 00:04:39.265261   29293 retry.go:31] will retry after 1.260567692s: waiting for machine to come up
	I1128 00:04:40.526925   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:40.527387   28506 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1128 00:04:40.527414   28506 main.go:141] libmachine: (multinode-883509) DBG | I1128 00:04:40.527338   29293 retry.go:31] will retry after 1.717959345s: waiting for machine to come up
	I1128 00:04:42.247272   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:42.247761   28506 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1128 00:04:42.247802   28506 main.go:141] libmachine: (multinode-883509) DBG | I1128 00:04:42.247699   29293 retry.go:31] will retry after 1.967711267s: waiting for machine to come up
	I1128 00:04:44.216737   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:44.217253   28506 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1128 00:04:44.217287   28506 main.go:141] libmachine: (multinode-883509) DBG | I1128 00:04:44.217179   29293 retry.go:31] will retry after 3.012656699s: waiting for machine to come up
	I1128 00:04:47.233306   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:47.233725   28506 main.go:141] libmachine: (multinode-883509) DBG | unable to find current IP address of domain multinode-883509 in network mk-multinode-883509
	I1128 00:04:47.233752   28506 main.go:141] libmachine: (multinode-883509) DBG | I1128 00:04:47.233676   29293 retry.go:31] will retry after 3.624748214s: waiting for machine to come up
	I1128 00:04:50.861765   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:50.862230   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has current primary IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:50.862247   28506 main.go:141] libmachine: (multinode-883509) Found IP for machine: 192.168.39.159
	I1128 00:04:50.862261   28506 main.go:141] libmachine: (multinode-883509) Reserving static IP address...
	I1128 00:04:50.862770   28506 main.go:141] libmachine: (multinode-883509) Reserved static IP address: 192.168.39.159
	I1128 00:04:50.862793   28506 main.go:141] libmachine: (multinode-883509) Waiting for SSH to be available...
	I1128 00:04:50.862814   28506 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "multinode-883509", mac: "52:54:00:e1:08:02", ip: "192.168.39.159"} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 01:04:45 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1128 00:04:50.862863   28506 main.go:141] libmachine: (multinode-883509) DBG | skip adding static IP to network mk-multinode-883509 - found existing host DHCP lease matching {name: "multinode-883509", mac: "52:54:00:e1:08:02", ip: "192.168.39.159"}
	I1128 00:04:50.862885   28506 main.go:141] libmachine: (multinode-883509) DBG | Getting to WaitForSSH function...
	I1128 00:04:50.864933   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:50.865393   28506 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 01:04:45 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1128 00:04:50.865421   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:50.865543   28506 main.go:141] libmachine: (multinode-883509) DBG | Using SSH client type: external
	I1128 00:04:50.865566   28506 main.go:141] libmachine: (multinode-883509) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa (-rw-------)
	I1128 00:04:50.865599   28506 main.go:141] libmachine: (multinode-883509) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:04:50.865625   28506 main.go:141] libmachine: (multinode-883509) DBG | About to run SSH command:
	I1128 00:04:50.865644   28506 main.go:141] libmachine: (multinode-883509) DBG | exit 0
	I1128 00:04:50.956222   28506 main.go:141] libmachine: (multinode-883509) DBG | SSH cmd err, output: <nil>: 
	I1128 00:04:50.956590   28506 main.go:141] libmachine: (multinode-883509) Calling .GetConfigRaw
	I1128 00:04:50.957223   28506 main.go:141] libmachine: (multinode-883509) Calling .GetIP
	I1128 00:04:50.959453   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:50.959819   28506 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 01:04:45 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1128 00:04:50.959853   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:50.960159   28506 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/config.json ...
	I1128 00:04:50.960337   28506 machine.go:88] provisioning docker machine ...
	I1128 00:04:50.960353   28506 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1128 00:04:50.960560   28506 main.go:141] libmachine: (multinode-883509) Calling .GetMachineName
	I1128 00:04:50.960698   28506 buildroot.go:166] provisioning hostname "multinode-883509"
	I1128 00:04:50.960716   28506 main.go:141] libmachine: (multinode-883509) Calling .GetMachineName
	I1128 00:04:50.960853   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1128 00:04:50.962853   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:50.963174   28506 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 01:04:45 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1128 00:04:50.963210   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:50.963312   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1128 00:04:50.963487   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1128 00:04:50.963639   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1128 00:04:50.963778   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1128 00:04:50.963911   28506 main.go:141] libmachine: Using SSH client type: native
	I1128 00:04:50.964311   28506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1128 00:04:50.964325   28506 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-883509 && echo "multinode-883509" | sudo tee /etc/hostname
	I1128 00:04:51.099983   28506 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-883509
	
	I1128 00:04:51.100018   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1128 00:04:51.102890   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:51.103338   28506 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 01:04:45 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1128 00:04:51.103370   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:51.103477   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1128 00:04:51.103703   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1128 00:04:51.103886   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1128 00:04:51.104032   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1128 00:04:51.104176   28506 main.go:141] libmachine: Using SSH client type: native
	I1128 00:04:51.104528   28506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1128 00:04:51.104547   28506 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-883509' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-883509/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-883509' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:04:51.237222   28506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:04:51.237248   28506 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:04:51.237274   28506 buildroot.go:174] setting up certificates
	I1128 00:04:51.237284   28506 provision.go:83] configureAuth start
	I1128 00:04:51.237291   28506 main.go:141] libmachine: (multinode-883509) Calling .GetMachineName
	I1128 00:04:51.237555   28506 main.go:141] libmachine: (multinode-883509) Calling .GetIP
	I1128 00:04:51.239997   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:51.240354   28506 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 01:04:45 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1128 00:04:51.240382   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:51.240516   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1128 00:04:51.242663   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:51.243055   28506 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 01:04:45 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1128 00:04:51.243080   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:51.243230   28506 provision.go:138] copyHostCerts
	I1128 00:04:51.243262   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:04:51.243301   28506 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:04:51.243313   28506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:04:51.243394   28506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:04:51.243508   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:04:51.243535   28506 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:04:51.243543   28506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:04:51.243582   28506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:04:51.243655   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:04:51.243678   28506 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:04:51.243685   28506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:04:51.243723   28506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:04:51.243842   28506 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.multinode-883509 san=[192.168.39.159 192.168.39.159 localhost 127.0.0.1 minikube multinode-883509]
	I1128 00:04:51.316814   28506 provision.go:172] copyRemoteCerts
	I1128 00:04:51.316906   28506 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:04:51.316939   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1128 00:04:51.319326   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:51.319632   28506 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 01:04:45 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1128 00:04:51.319670   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:51.319876   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1128 00:04:51.320045   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1128 00:04:51.320210   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1128 00:04:51.320350   28506 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa Username:docker}
	I1128 00:04:51.409860   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1128 00:04:51.409943   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:04:51.436673   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1128 00:04:51.436729   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1128 00:04:51.463246   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1128 00:04:51.463303   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 00:04:51.489647   28506 provision.go:86] duration metric: configureAuth took 252.349402ms
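The block above generates a server certificate signed by minikube's local CA and then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. minikube does this in Go; purely as an illustration, a certificate carrying the same SANs could be produced by hand with openssl (the filenames match the cert material named in the log, but the openssl invocation itself is an assumption, not what minikube runs):

    # Illustrative only: generate a key and a CA-signed cert with the SANs
    # listed in the provision step above.
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem \
      -subj "/O=jenkins.multinode-883509" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:192.168.39.159,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-883509')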
	I1128 00:04:51.489679   28506 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:04:51.489874   28506 config.go:182] Loaded profile config "multinode-883509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:04:51.489939   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1128 00:04:51.493058   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:51.493489   28506 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 01:04:45 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1128 00:04:51.493523   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:51.493690   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1128 00:04:51.493872   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1128 00:04:51.494035   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1128 00:04:51.494173   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1128 00:04:51.494296   28506 main.go:141] libmachine: Using SSH client type: native
	I1128 00:04:51.494621   28506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1128 00:04:51.494643   28506 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:04:51.806105   28506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:04:51.806139   28506 machine.go:91] provisioned docker machine in 845.788587ms
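The %!s(MISSING) in the command above is an artifact of how the log line was rendered, not part of what ran on the guest. Judging by the output echoed back at 00:04:51.806105, the SSH command is effectively:

    # Reconstructed from the log output; writes the CRI-O extra-options file and
    # restarts the runtime so the insecure-registry flag takes effect.
    sudo mkdir -p /etc/sysconfig \
      && printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
         | sudo tee /etc/sysconfig/crio.minikube \
      && sudo systemctl restart crio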
	I1128 00:04:51.806153   28506 start.go:300] post-start starting for "multinode-883509" (driver="kvm2")
	I1128 00:04:51.806165   28506 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:04:51.806184   28506 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1128 00:04:51.806491   28506 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:04:51.806521   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1128 00:04:51.809207   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:51.809639   28506 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 01:04:45 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1128 00:04:51.809684   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:51.809804   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1128 00:04:51.809995   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1128 00:04:51.810160   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1128 00:04:51.810294   28506 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa Username:docker}
	I1128 00:04:51.903585   28506 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:04:51.907710   28506 command_runner.go:130] > NAME=Buildroot
	I1128 00:04:51.907729   28506 command_runner.go:130] > VERSION=2021.02.12-1-g8be4f20-dirty
	I1128 00:04:51.907733   28506 command_runner.go:130] > ID=buildroot
	I1128 00:04:51.907741   28506 command_runner.go:130] > VERSION_ID=2021.02.12
	I1128 00:04:51.907749   28506 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1128 00:04:51.907790   28506 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:04:51.907806   28506 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:04:51.907882   28506 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:04:51.907976   28506 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:04:51.907989   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> /etc/ssl/certs/119302.pem
	I1128 00:04:51.908089   28506 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:04:51.917272   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:04:51.939498   28506 start.go:303] post-start completed in 133.320444ms
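Post-start found one local asset under .minikube/files and mirrored it to /etc/ssl/certs in the guest. A hand-rolled equivalent of that sync step (key path, user and address are taken from the SSH client lines above; the two-step copy via /tmp is a sketch of the idea, not minikube's actual mechanism):

    KEY=/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa
    SRC=/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem
    # Stage the file in /tmp, then move it into place with root privileges.
    scp -i "$KEY" "$SRC" docker@192.168.39.159:/tmp/119302.pem
    ssh -i "$KEY" docker@192.168.39.159 \
      'sudo mkdir -p /etc/ssl/certs && sudo mv /tmp/119302.pem /etc/ssl/certs/119302.pem'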
	I1128 00:04:51.939524   28506 fix.go:56] fixHost completed within 18.739513825s
	I1128 00:04:51.939543   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1128 00:04:51.942149   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:51.942479   28506 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 01:04:45 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1128 00:04:51.942510   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:51.942660   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1128 00:04:51.942855   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1128 00:04:51.943050   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1128 00:04:51.943235   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1128 00:04:51.943439   28506 main.go:141] libmachine: Using SSH client type: native
	I1128 00:04:51.943740   28506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.159 22 <nil> <nil>}
	I1128 00:04:51.943750   28506 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:04:52.069316   28506 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701129892.020308240
	
	I1128 00:04:52.069339   28506 fix.go:206] guest clock: 1701129892.020308240
	I1128 00:04:52.069349   28506 fix.go:219] Guest: 2023-11-28 00:04:52.02030824 +0000 UTC Remote: 2023-11-28 00:04:51.939527255 +0000 UTC m=+316.778734969 (delta=80.780985ms)
	I1128 00:04:52.069365   28506 fix.go:190] guest clock delta is within tolerance: 80.780985ms
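Once more, date +%!s(MISSING).%!N(MISSING) is the log-rendering artifact; the command sent to the guest is date +%s.%N. minikube compares that timestamp against its own clock (the fix.go lines above) and proceeds when the drift is within tolerance, here about 81ms. A standalone sketch of the same comparison (the ssh target and the use of bc are assumptions for illustration):

    KEY=/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa
    guest=$(ssh -i "$KEY" docker@192.168.39.159 'date +%s.%N')   # guest clock
    host=$(date +%s.%N)                                          # local clock
    echo "clock delta: $(echo "$host - $guest" | bc) s"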
	I1128 00:04:52.069370   28506 start.go:83] releasing machines lock for "multinode-883509", held for 18.86941493s
	I1128 00:04:52.069386   28506 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1128 00:04:52.069617   28506 main.go:141] libmachine: (multinode-883509) Calling .GetIP
	I1128 00:04:52.072261   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:52.072693   28506 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 01:04:45 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1128 00:04:52.072720   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:52.072898   28506 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1128 00:04:52.073356   28506 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1128 00:04:52.073517   28506 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1128 00:04:52.073563   28506 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:04:52.073635   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1128 00:04:52.073711   28506 ssh_runner.go:195] Run: cat /version.json
	I1128 00:04:52.073747   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1128 00:04:52.076555   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:52.076581   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:52.076943   28506 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 01:04:45 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1128 00:04:52.076971   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:52.077000   28506 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 01:04:45 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1128 00:04:52.077030   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:52.077194   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1128 00:04:52.077344   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1128 00:04:52.077399   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1128 00:04:52.077505   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1128 00:04:52.077657   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1128 00:04:52.077662   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1128 00:04:52.077792   28506 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa Username:docker}
	I1128 00:04:52.077841   28506 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa Username:docker}
	I1128 00:04:52.175231   28506 command_runner.go:130] > {"iso_version": "v1.32.1-1701107474-17206", "kicbase_version": "v0.0.42-1700142204-17634", "minikube_version": "v1.32.0", "commit": "bcc467dd5a1a124d966bcc72a040bb167e304544"}
	I1128 00:04:52.198114   28506 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1128 00:04:52.199152   28506 ssh_runner.go:195] Run: systemctl --version
	I1128 00:04:52.204426   28506 command_runner.go:130] > systemd 247 (247)
	I1128 00:04:52.204464   28506 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1128 00:04:52.204622   28506 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:04:52.342995   28506 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1128 00:04:52.349303   28506 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1128 00:04:52.349744   28506 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:04:52.349815   28506 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:04:52.364227   28506 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1128 00:04:52.364463   28506 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
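The find command above also carries a rendering artifact: %!p(MISSING) stands for the -printf "%p, " format that produced the "/etc/cni/net.d/87-podman-bridge.conflist, " line below it. Reconstructed (with the mv quoting tightened slightly), the step renames any bridge or podman CNI configs so they stop shadowing the CNI that minikube manages:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;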
	I1128 00:04:52.364483   28506 start.go:472] detecting cgroup driver to use...
	I1128 00:04:52.364555   28506 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:04:52.377334   28506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:04:52.389131   28506 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:04:52.389188   28506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:04:52.402572   28506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:04:52.414862   28506 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:04:52.522887   28506 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1128 00:04:52.522963   28506 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:04:52.537779   28506 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1128 00:04:52.641940   28506 docker.go:219] disabling docker service ...
	I1128 00:04:52.642017   28506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:04:52.655941   28506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:04:52.668052   28506 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1128 00:04:52.668521   28506 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:04:52.681976   28506 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1128 00:04:52.772998   28506 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:04:52.869764   28506 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1128 00:04:52.869796   28506 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1128 00:04:52.869865   28506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:04:52.882481   28506 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:04:52.899072   28506 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
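The mangled printf here resolves to a one-line write of /etc/crictl.yaml pointing crictl at CRI-O's socket, as confirmed by the runtime-endpoint line echoed back above. An equivalent, slightly reformatted version:

    # Point crictl at the CRI-O socket (same content as the file written above).
    sudo mkdir -p /etc \
      && printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' \
         | sudo tee /etc/crictl.yaml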
	I1128 00:04:52.899119   28506 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 00:04:52.899173   28506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:04:52.909199   28506 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:04:52.909278   28506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:04:52.918680   28506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:04:52.928207   28506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
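Taken together, the three sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the following settings (reconstructed from the commands, not a verbatim dump of the file):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"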
	I1128 00:04:52.937673   28506 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:04:52.947668   28506 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:04:52.957634   28506 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:04:52.957678   28506 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:04:52.957730   28506 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:04:52.972268   28506 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
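The failed sysctl is expected on first boot: br_netfilter is not loaded yet, so the bridge sysctls do not exist. The fallback loads the module and enables IPv4 forwarding; afterwards both settings can be confirmed directly:

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables   # resolves once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward                # prints 1 after the echo above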
	I1128 00:04:52.981693   28506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:04:53.081436   28506 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 00:04:53.248226   28506 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:04:53.248307   28506 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:04:53.253293   28506 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1128 00:04:53.253314   28506 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1128 00:04:53.253324   28506 command_runner.go:130] > Device: 16h/22d	Inode: 731         Links: 1
	I1128 00:04:53.253334   28506 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 00:04:53.253341   28506 command_runner.go:130] > Access: 2023-11-28 00:04:53.184158983 +0000
	I1128 00:04:53.253349   28506 command_runner.go:130] > Modify: 2023-11-28 00:04:53.184158983 +0000
	I1128 00:04:53.253359   28506 command_runner.go:130] > Change: 2023-11-28 00:04:53.184158983 +0000
	I1128 00:04:53.253365   28506 command_runner.go:130] >  Birth: -
	I1128 00:04:53.253506   28506 start.go:540] Will wait 60s for crictl version
	I1128 00:04:53.253562   28506 ssh_runner.go:195] Run: which crictl
	I1128 00:04:53.257669   28506 command_runner.go:130] > /usr/bin/crictl
	I1128 00:04:53.257787   28506 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:04:53.300768   28506 command_runner.go:130] > Version:  0.1.0
	I1128 00:04:53.300795   28506 command_runner.go:130] > RuntimeName:  cri-o
	I1128 00:04:53.300802   28506 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1128 00:04:53.300810   28506 command_runner.go:130] > RuntimeApiVersion:  v1
	I1128 00:04:53.302446   28506 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:04:53.302534   28506 ssh_runner.go:195] Run: crio --version
	I1128 00:04:53.351990   28506 command_runner.go:130] > crio version 1.24.1
	I1128 00:04:53.352017   28506 command_runner.go:130] > Version:          1.24.1
	I1128 00:04:53.352028   28506 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1128 00:04:53.352036   28506 command_runner.go:130] > GitTreeState:     dirty
	I1128 00:04:53.352044   28506 command_runner.go:130] > BuildDate:        2023-11-27T22:40:48Z
	I1128 00:04:53.352050   28506 command_runner.go:130] > GoVersion:        go1.19.9
	I1128 00:04:53.352056   28506 command_runner.go:130] > Compiler:         gc
	I1128 00:04:53.352062   28506 command_runner.go:130] > Platform:         linux/amd64
	I1128 00:04:53.352070   28506 command_runner.go:130] > Linkmode:         dynamic
	I1128 00:04:53.352081   28506 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1128 00:04:53.352089   28506 command_runner.go:130] > SeccompEnabled:   true
	I1128 00:04:53.352095   28506 command_runner.go:130] > AppArmorEnabled:  false
	I1128 00:04:53.353458   28506 ssh_runner.go:195] Run: crio --version
	I1128 00:04:53.402045   28506 command_runner.go:130] > crio version 1.24.1
	I1128 00:04:53.402072   28506 command_runner.go:130] > Version:          1.24.1
	I1128 00:04:53.402082   28506 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1128 00:04:53.402088   28506 command_runner.go:130] > GitTreeState:     dirty
	I1128 00:04:53.402097   28506 command_runner.go:130] > BuildDate:        2023-11-27T22:40:48Z
	I1128 00:04:53.402104   28506 command_runner.go:130] > GoVersion:        go1.19.9
	I1128 00:04:53.402109   28506 command_runner.go:130] > Compiler:         gc
	I1128 00:04:53.402116   28506 command_runner.go:130] > Platform:         linux/amd64
	I1128 00:04:53.402124   28506 command_runner.go:130] > Linkmode:         dynamic
	I1128 00:04:53.402135   28506 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1128 00:04:53.402157   28506 command_runner.go:130] > SeccompEnabled:   true
	I1128 00:04:53.402172   28506 command_runner.go:130] > AppArmorEnabled:  false
	I1128 00:04:53.406655   28506 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 00:04:53.407997   28506 main.go:141] libmachine: (multinode-883509) Calling .GetIP
	I1128 00:04:53.410643   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:53.411056   28506 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 01:04:45 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1128 00:04:53.411087   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:04:53.411257   28506 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1128 00:04:53.415607   28506 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
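The one-liner above pins host.minikube.internal to the libvirt gateway (192.168.39.1). Broken out for readability (functionally the same; the temp file keeps /etc/hosts intact until the new copy is ready):

    # Drop any existing host.minikube.internal entry, append a fresh one,
    # then copy the result back over /etc/hosts.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.39.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts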
	I1128 00:04:53.428297   28506 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:04:53.428373   28506 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:04:53.472773   28506 command_runner.go:130] > {
	I1128 00:04:53.472799   28506 command_runner.go:130] >   "images": [
	I1128 00:04:53.472804   28506 command_runner.go:130] >     {
	I1128 00:04:53.472811   28506 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1128 00:04:53.472816   28506 command_runner.go:130] >       "repoTags": [
	I1128 00:04:53.472822   28506 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1128 00:04:53.472826   28506 command_runner.go:130] >       ],
	I1128 00:04:53.472839   28506 command_runner.go:130] >       "repoDigests": [
	I1128 00:04:53.472852   28506 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1128 00:04:53.472868   28506 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1128 00:04:53.472902   28506 command_runner.go:130] >       ],
	I1128 00:04:53.472910   28506 command_runner.go:130] >       "size": "750414",
	I1128 00:04:53.472921   28506 command_runner.go:130] >       "uid": {
	I1128 00:04:53.472930   28506 command_runner.go:130] >         "value": "65535"
	I1128 00:04:53.472947   28506 command_runner.go:130] >       },
	I1128 00:04:53.472962   28506 command_runner.go:130] >       "username": "",
	I1128 00:04:53.472977   28506 command_runner.go:130] >       "spec": null,
	I1128 00:04:53.472987   28506 command_runner.go:130] >       "pinned": false
	I1128 00:04:53.472998   28506 command_runner.go:130] >     }
	I1128 00:04:53.473010   28506 command_runner.go:130] >   ]
	I1128 00:04:53.473017   28506 command_runner.go:130] > }
	I1128 00:04:53.473144   28506 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
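Only the pause image is present in the freshly restarted guest, which is why the preloaded tarball is copied in next. For reference, the repo tags can be pulled straight out of that JSON with jq (jq is an assumption here; nothing in this log shows it being installed in the guest image):

    sudo crictl images --output json | jq -r '.images[].repoTags[]'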
	I1128 00:04:53.473203   28506 ssh_runner.go:195] Run: which lz4
	I1128 00:04:53.477200   28506 command_runner.go:130] > /usr/bin/lz4
	I1128 00:04:53.477230   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1128 00:04:53.477318   28506 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 00:04:53.481690   28506 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 00:04:53.481728   28506 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 00:04:53.481752   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 00:04:55.291965   28506 crio.go:444] Took 1.814680 seconds to copy over tarball
	I1128 00:04:55.292036   28506 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 00:04:58.215634   28506 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.923567493s)
	I1128 00:04:58.215678   28506 crio.go:451] Took 2.923674 seconds to extract the tarball
	I1128 00:04:58.215689   28506 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 00:04:58.257351   28506 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:04:58.305185   28506 command_runner.go:130] > {
	I1128 00:04:58.305209   28506 command_runner.go:130] >   "images": [
	I1128 00:04:58.305215   28506 command_runner.go:130] >     {
	I1128 00:04:58.305223   28506 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1128 00:04:58.305228   28506 command_runner.go:130] >       "repoTags": [
	I1128 00:04:58.305234   28506 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1128 00:04:58.305238   28506 command_runner.go:130] >       ],
	I1128 00:04:58.305242   28506 command_runner.go:130] >       "repoDigests": [
	I1128 00:04:58.305265   28506 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1128 00:04:58.305282   28506 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1128 00:04:58.305289   28506 command_runner.go:130] >       ],
	I1128 00:04:58.305299   28506 command_runner.go:130] >       "size": "65258016",
	I1128 00:04:58.305306   28506 command_runner.go:130] >       "uid": null,
	I1128 00:04:58.305310   28506 command_runner.go:130] >       "username": "",
	I1128 00:04:58.305317   28506 command_runner.go:130] >       "spec": null,
	I1128 00:04:58.305321   28506 command_runner.go:130] >       "pinned": false
	I1128 00:04:58.305328   28506 command_runner.go:130] >     },
	I1128 00:04:58.305332   28506 command_runner.go:130] >     {
	I1128 00:04:58.305338   28506 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1128 00:04:58.305348   28506 command_runner.go:130] >       "repoTags": [
	I1128 00:04:58.305358   28506 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1128 00:04:58.305368   28506 command_runner.go:130] >       ],
	I1128 00:04:58.305375   28506 command_runner.go:130] >       "repoDigests": [
	I1128 00:04:58.305390   28506 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1128 00:04:58.305405   28506 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1128 00:04:58.305412   28506 command_runner.go:130] >       ],
	I1128 00:04:58.305417   28506 command_runner.go:130] >       "size": "31470524",
	I1128 00:04:58.305423   28506 command_runner.go:130] >       "uid": null,
	I1128 00:04:58.305431   28506 command_runner.go:130] >       "username": "",
	I1128 00:04:58.305441   28506 command_runner.go:130] >       "spec": null,
	I1128 00:04:58.305448   28506 command_runner.go:130] >       "pinned": false
	I1128 00:04:58.305454   28506 command_runner.go:130] >     },
	I1128 00:04:58.305464   28506 command_runner.go:130] >     {
	I1128 00:04:58.305475   28506 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1128 00:04:58.305485   28506 command_runner.go:130] >       "repoTags": [
	I1128 00:04:58.305496   28506 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1128 00:04:58.305503   28506 command_runner.go:130] >       ],
	I1128 00:04:58.305508   28506 command_runner.go:130] >       "repoDigests": [
	I1128 00:04:58.305523   28506 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1128 00:04:58.305539   28506 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1128 00:04:58.305549   28506 command_runner.go:130] >       ],
	I1128 00:04:58.305556   28506 command_runner.go:130] >       "size": "53621675",
	I1128 00:04:58.305566   28506 command_runner.go:130] >       "uid": null,
	I1128 00:04:58.305574   28506 command_runner.go:130] >       "username": "",
	I1128 00:04:58.305583   28506 command_runner.go:130] >       "spec": null,
	I1128 00:04:58.305589   28506 command_runner.go:130] >       "pinned": false
	I1128 00:04:58.305595   28506 command_runner.go:130] >     },
	I1128 00:04:58.305606   28506 command_runner.go:130] >     {
	I1128 00:04:58.305621   28506 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1128 00:04:58.305631   28506 command_runner.go:130] >       "repoTags": [
	I1128 00:04:58.305643   28506 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1128 00:04:58.305652   28506 command_runner.go:130] >       ],
	I1128 00:04:58.305659   28506 command_runner.go:130] >       "repoDigests": [
	I1128 00:04:58.305673   28506 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1128 00:04:58.305683   28506 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1128 00:04:58.305697   28506 command_runner.go:130] >       ],
	I1128 00:04:58.305709   28506 command_runner.go:130] >       "size": "295456551",
	I1128 00:04:58.305719   28506 command_runner.go:130] >       "uid": {
	I1128 00:04:58.305726   28506 command_runner.go:130] >         "value": "0"
	I1128 00:04:58.305735   28506 command_runner.go:130] >       },
	I1128 00:04:58.305742   28506 command_runner.go:130] >       "username": "",
	I1128 00:04:58.305751   28506 command_runner.go:130] >       "spec": null,
	I1128 00:04:58.305758   28506 command_runner.go:130] >       "pinned": false
	I1128 00:04:58.305764   28506 command_runner.go:130] >     },
	I1128 00:04:58.305768   28506 command_runner.go:130] >     {
	I1128 00:04:58.305778   28506 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1128 00:04:58.305789   28506 command_runner.go:130] >       "repoTags": [
	I1128 00:04:58.305802   28506 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1128 00:04:58.305811   28506 command_runner.go:130] >       ],
	I1128 00:04:58.305818   28506 command_runner.go:130] >       "repoDigests": [
	I1128 00:04:58.305832   28506 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1128 00:04:58.305846   28506 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1128 00:04:58.305853   28506 command_runner.go:130] >       ],
	I1128 00:04:58.305860   28506 command_runner.go:130] >       "size": "127226832",
	I1128 00:04:58.305872   28506 command_runner.go:130] >       "uid": {
	I1128 00:04:58.305883   28506 command_runner.go:130] >         "value": "0"
	I1128 00:04:58.305892   28506 command_runner.go:130] >       },
	I1128 00:04:58.305899   28506 command_runner.go:130] >       "username": "",
	I1128 00:04:58.305909   28506 command_runner.go:130] >       "spec": null,
	I1128 00:04:58.305922   28506 command_runner.go:130] >       "pinned": false
	I1128 00:04:58.305930   28506 command_runner.go:130] >     },
	I1128 00:04:58.305934   28506 command_runner.go:130] >     {
	I1128 00:04:58.305941   28506 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1128 00:04:58.305951   28506 command_runner.go:130] >       "repoTags": [
	I1128 00:04:58.305962   28506 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1128 00:04:58.305972   28506 command_runner.go:130] >       ],
	I1128 00:04:58.305983   28506 command_runner.go:130] >       "repoDigests": [
	I1128 00:04:58.305999   28506 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1128 00:04:58.306015   28506 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1128 00:04:58.306022   28506 command_runner.go:130] >       ],
	I1128 00:04:58.306027   28506 command_runner.go:130] >       "size": "123261750",
	I1128 00:04:58.306033   28506 command_runner.go:130] >       "uid": {
	I1128 00:04:58.306044   28506 command_runner.go:130] >         "value": "0"
	I1128 00:04:58.306051   28506 command_runner.go:130] >       },
	I1128 00:04:58.306061   28506 command_runner.go:130] >       "username": "",
	I1128 00:04:58.306068   28506 command_runner.go:130] >       "spec": null,
	I1128 00:04:58.306078   28506 command_runner.go:130] >       "pinned": false
	I1128 00:04:58.306086   28506 command_runner.go:130] >     },
	I1128 00:04:58.306094   28506 command_runner.go:130] >     {
	I1128 00:04:58.306105   28506 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1128 00:04:58.306112   28506 command_runner.go:130] >       "repoTags": [
	I1128 00:04:58.306120   28506 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1128 00:04:58.306130   28506 command_runner.go:130] >       ],
	I1128 00:04:58.306138   28506 command_runner.go:130] >       "repoDigests": [
	I1128 00:04:58.306156   28506 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1128 00:04:58.306171   28506 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1128 00:04:58.306180   28506 command_runner.go:130] >       ],
	I1128 00:04:58.306187   28506 command_runner.go:130] >       "size": "74749335",
	I1128 00:04:58.306194   28506 command_runner.go:130] >       "uid": null,
	I1128 00:04:58.306199   28506 command_runner.go:130] >       "username": "",
	I1128 00:04:58.306209   28506 command_runner.go:130] >       "spec": null,
	I1128 00:04:58.306220   28506 command_runner.go:130] >       "pinned": false
	I1128 00:04:58.306230   28506 command_runner.go:130] >     },
	I1128 00:04:58.306236   28506 command_runner.go:130] >     {
	I1128 00:04:58.306248   28506 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1128 00:04:58.306258   28506 command_runner.go:130] >       "repoTags": [
	I1128 00:04:58.306266   28506 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1128 00:04:58.306274   28506 command_runner.go:130] >       ],
	I1128 00:04:58.306279   28506 command_runner.go:130] >       "repoDigests": [
	I1128 00:04:58.306326   28506 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1128 00:04:58.306341   28506 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1128 00:04:58.306348   28506 command_runner.go:130] >       ],
	I1128 00:04:58.306355   28506 command_runner.go:130] >       "size": "61551410",
	I1128 00:04:58.306361   28506 command_runner.go:130] >       "uid": {
	I1128 00:04:58.306368   28506 command_runner.go:130] >         "value": "0"
	I1128 00:04:58.306374   28506 command_runner.go:130] >       },
	I1128 00:04:58.306380   28506 command_runner.go:130] >       "username": "",
	I1128 00:04:58.306390   28506 command_runner.go:130] >       "spec": null,
	I1128 00:04:58.306395   28506 command_runner.go:130] >       "pinned": false
	I1128 00:04:58.306401   28506 command_runner.go:130] >     },
	I1128 00:04:58.306411   28506 command_runner.go:130] >     {
	I1128 00:04:58.306421   28506 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1128 00:04:58.306431   28506 command_runner.go:130] >       "repoTags": [
	I1128 00:04:58.306443   28506 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1128 00:04:58.306449   28506 command_runner.go:130] >       ],
	I1128 00:04:58.306456   28506 command_runner.go:130] >       "repoDigests": [
	I1128 00:04:58.306467   28506 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1128 00:04:58.306478   28506 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1128 00:04:58.306486   28506 command_runner.go:130] >       ],
	I1128 00:04:58.306493   28506 command_runner.go:130] >       "size": "750414",
	I1128 00:04:58.306499   28506 command_runner.go:130] >       "uid": {
	I1128 00:04:58.306506   28506 command_runner.go:130] >         "value": "65535"
	I1128 00:04:58.306512   28506 command_runner.go:130] >       },
	I1128 00:04:58.306517   28506 command_runner.go:130] >       "username": "",
	I1128 00:04:58.306528   28506 command_runner.go:130] >       "spec": null,
	I1128 00:04:58.306534   28506 command_runner.go:130] >       "pinned": false
	I1128 00:04:58.306550   28506 command_runner.go:130] >     }
	I1128 00:04:58.306560   28506 command_runner.go:130] >   ]
	I1128 00:04:58.306565   28506 command_runner.go:130] > }
	I1128 00:04:58.306722   28506 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 00:04:58.306737   28506 cache_images.go:84] Images are preloaded, skipping loading
	I1128 00:04:58.306810   28506 ssh_runner.go:195] Run: crio config
	I1128 00:04:58.360118   28506 command_runner.go:130] ! time="2023-11-28 00:04:58.310531066Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1128 00:04:58.360151   28506 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1128 00:04:58.365106   28506 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1128 00:04:58.365124   28506 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1128 00:04:58.365130   28506 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1128 00:04:58.365134   28506 command_runner.go:130] > #
	I1128 00:04:58.365141   28506 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1128 00:04:58.365147   28506 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1128 00:04:58.365153   28506 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1128 00:04:58.365164   28506 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1128 00:04:58.365169   28506 command_runner.go:130] > # reload'.
	I1128 00:04:58.365175   28506 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1128 00:04:58.365181   28506 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1128 00:04:58.365188   28506 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1128 00:04:58.365194   28506 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1128 00:04:58.365199   28506 command_runner.go:130] > [crio]
	I1128 00:04:58.365204   28506 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1128 00:04:58.365210   28506 command_runner.go:130] > # containers images, in this directory.
	I1128 00:04:58.365215   28506 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1128 00:04:58.365225   28506 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1128 00:04:58.365234   28506 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1128 00:04:58.365242   28506 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1128 00:04:58.365256   28506 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1128 00:04:58.365264   28506 command_runner.go:130] > storage_driver = "overlay"
	I1128 00:04:58.365273   28506 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1128 00:04:58.365282   28506 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1128 00:04:58.365290   28506 command_runner.go:130] > storage_option = [
	I1128 00:04:58.365295   28506 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1128 00:04:58.365300   28506 command_runner.go:130] > ]
	I1128 00:04:58.365307   28506 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1128 00:04:58.365315   28506 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1128 00:04:58.365324   28506 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1128 00:04:58.365332   28506 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1128 00:04:58.365338   28506 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1128 00:04:58.365350   28506 command_runner.go:130] > # always happen on a node reboot
	I1128 00:04:58.365363   28506 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1128 00:04:58.365375   28506 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1128 00:04:58.365384   28506 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1128 00:04:58.365394   28506 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1128 00:04:58.365406   28506 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1128 00:04:58.365416   28506 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1128 00:04:58.365426   28506 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1128 00:04:58.365433   28506 command_runner.go:130] > # internal_wipe = true
	I1128 00:04:58.365448   28506 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1128 00:04:58.365459   28506 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1128 00:04:58.365469   28506 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1128 00:04:58.365478   28506 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1128 00:04:58.365484   28506 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1128 00:04:58.365490   28506 command_runner.go:130] > [crio.api]
	I1128 00:04:58.365496   28506 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1128 00:04:58.365503   28506 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1128 00:04:58.365508   28506 command_runner.go:130] > # IP address on which the stream server will listen.
	I1128 00:04:58.365516   28506 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1128 00:04:58.365528   28506 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1128 00:04:58.365541   28506 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1128 00:04:58.365551   28506 command_runner.go:130] > # stream_port = "0"
	I1128 00:04:58.365560   28506 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1128 00:04:58.365570   28506 command_runner.go:130] > # stream_enable_tls = false
	I1128 00:04:58.365581   28506 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1128 00:04:58.365588   28506 command_runner.go:130] > # stream_idle_timeout = ""
	I1128 00:04:58.365594   28506 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1128 00:04:58.365605   28506 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1128 00:04:58.365615   28506 command_runner.go:130] > # minutes.
	I1128 00:04:58.365623   28506 command_runner.go:130] > # stream_tls_cert = ""
	I1128 00:04:58.365638   28506 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1128 00:04:58.365651   28506 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1128 00:04:58.365661   28506 command_runner.go:130] > # stream_tls_key = ""
	I1128 00:04:58.365672   28506 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1128 00:04:58.365682   28506 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1128 00:04:58.365689   28506 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1128 00:04:58.365699   28506 command_runner.go:130] > # stream_tls_ca = ""
	I1128 00:04:58.365715   28506 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1128 00:04:58.365726   28506 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1128 00:04:58.365738   28506 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1128 00:04:58.365749   28506 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1128 00:04:58.365767   28506 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1128 00:04:58.365782   28506 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1128 00:04:58.365789   28506 command_runner.go:130] > [crio.runtime]
	I1128 00:04:58.365803   28506 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1128 00:04:58.365816   28506 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1128 00:04:58.365826   28506 command_runner.go:130] > # "nofile=1024:2048"
	I1128 00:04:58.365852   28506 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1128 00:04:58.365867   28506 command_runner.go:130] > # default_ulimits = [
	I1128 00:04:58.365873   28506 command_runner.go:130] > # ]
	I1128 00:04:58.365884   28506 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1128 00:04:58.365894   28506 command_runner.go:130] > # no_pivot = false
	I1128 00:04:58.365906   28506 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1128 00:04:58.365919   28506 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1128 00:04:58.365931   28506 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1128 00:04:58.365938   28506 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1128 00:04:58.365945   28506 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1128 00:04:58.365957   28506 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1128 00:04:58.365969   28506 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1128 00:04:58.365980   28506 command_runner.go:130] > # Cgroup setting for conmon
	I1128 00:04:58.365992   28506 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1128 00:04:58.366002   28506 command_runner.go:130] > conmon_cgroup = "pod"
	I1128 00:04:58.366015   28506 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1128 00:04:58.366025   28506 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1128 00:04:58.366032   28506 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1128 00:04:58.366041   28506 command_runner.go:130] > conmon_env = [
	I1128 00:04:58.366052   28506 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1128 00:04:58.366061   28506 command_runner.go:130] > ]
	I1128 00:04:58.366070   28506 command_runner.go:130] > # Additional environment variables to set for all the
	I1128 00:04:58.366087   28506 command_runner.go:130] > # containers. These are overridden if set in the
	I1128 00:04:58.366099   28506 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1128 00:04:58.366108   28506 command_runner.go:130] > # default_env = [
	I1128 00:04:58.366112   28506 command_runner.go:130] > # ]
	I1128 00:04:58.366119   28506 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1128 00:04:58.366128   28506 command_runner.go:130] > # selinux = false
	I1128 00:04:58.366140   28506 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1128 00:04:58.366154   28506 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1128 00:04:58.366166   28506 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1128 00:04:58.366176   28506 command_runner.go:130] > # seccomp_profile = ""
	I1128 00:04:58.366187   28506 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1128 00:04:58.366264   28506 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1128 00:04:58.366289   28506 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1128 00:04:58.366298   28506 command_runner.go:130] > # which might increase security.
	I1128 00:04:58.366309   28506 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1128 00:04:58.366321   28506 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1128 00:04:58.366336   28506 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1128 00:04:58.366350   28506 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1128 00:04:58.366370   28506 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1128 00:04:58.366388   28506 command_runner.go:130] > # This option supports live configuration reload.
	I1128 00:04:58.366395   28506 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1128 00:04:58.366404   28506 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1128 00:04:58.366412   28506 command_runner.go:130] > # the cgroup blockio controller.
	I1128 00:04:58.366423   28506 command_runner.go:130] > # blockio_config_file = ""
	I1128 00:04:58.366437   28506 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1128 00:04:58.366448   28506 command_runner.go:130] > # irqbalance daemon.
	I1128 00:04:58.366460   28506 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1128 00:04:58.366478   28506 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1128 00:04:58.366489   28506 command_runner.go:130] > # This option supports live configuration reload.
	I1128 00:04:58.366499   28506 command_runner.go:130] > # rdt_config_file = ""
	I1128 00:04:58.366509   28506 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1128 00:04:58.366520   28506 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1128 00:04:58.366533   28506 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1128 00:04:58.366544   28506 command_runner.go:130] > # separate_pull_cgroup = ""
	I1128 00:04:58.366557   28506 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1128 00:04:58.366566   28506 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1128 00:04:58.366575   28506 command_runner.go:130] > # will be added.
	I1128 00:04:58.366587   28506 command_runner.go:130] > # default_capabilities = [
	I1128 00:04:58.366595   28506 command_runner.go:130] > # 	"CHOWN",
	I1128 00:04:58.366602   28506 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1128 00:04:58.366612   28506 command_runner.go:130] > # 	"FSETID",
	I1128 00:04:58.366618   28506 command_runner.go:130] > # 	"FOWNER",
	I1128 00:04:58.366628   28506 command_runner.go:130] > # 	"SETGID",
	I1128 00:04:58.366634   28506 command_runner.go:130] > # 	"SETUID",
	I1128 00:04:58.366643   28506 command_runner.go:130] > # 	"SETPCAP",
	I1128 00:04:58.366648   28506 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1128 00:04:58.366652   28506 command_runner.go:130] > # 	"KILL",
	I1128 00:04:58.366659   28506 command_runner.go:130] > # ]
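The commented-out list above is CRI-O's default capability set; anything beyond it has to come from the workload itself. A minimal sketch of how a pod would request that through its security context, assuming the k8s.io/api module is available (the capability choices here are purely illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Drop everything, then add back only what the container actually needs.
	sc := &corev1.SecurityContext{
		Capabilities: &corev1.Capabilities{
			Drop: []corev1.Capability{"ALL"},
			Add:  []corev1.Capability{"NET_BIND_SERVICE"},
		},
	}
	fmt.Printf("%+v\n", *sc.Capabilities)
}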
	I1128 00:04:58.366674   28506 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1128 00:04:58.366688   28506 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1128 00:04:58.366698   28506 command_runner.go:130] > # default_sysctls = [
	I1128 00:04:58.366703   28506 command_runner.go:130] > # ]
	I1128 00:04:58.366713   28506 command_runner.go:130] > # List of devices on the host that a
	I1128 00:04:58.366726   28506 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1128 00:04:58.366736   28506 command_runner.go:130] > # allowed_devices = [
	I1128 00:04:58.366743   28506 command_runner.go:130] > # 	"/dev/fuse",
	I1128 00:04:58.366749   28506 command_runner.go:130] > # ]
	I1128 00:04:58.366761   28506 command_runner.go:130] > # List of additional devices, specified as
	I1128 00:04:58.366777   28506 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1128 00:04:58.366790   28506 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1128 00:04:58.366831   28506 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1128 00:04:58.366842   28506 command_runner.go:130] > # additional_devices = [
	I1128 00:04:58.366848   28506 command_runner.go:130] > # ]
	I1128 00:04:58.366861   28506 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1128 00:04:58.366870   28506 command_runner.go:130] > # cdi_spec_dirs = [
	I1128 00:04:58.366880   28506 command_runner.go:130] > # 	"/etc/cdi",
	I1128 00:04:58.366886   28506 command_runner.go:130] > # 	"/var/run/cdi",
	I1128 00:04:58.366895   28506 command_runner.go:130] > # ]
	I1128 00:04:58.366904   28506 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1128 00:04:58.366913   28506 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1128 00:04:58.366924   28506 command_runner.go:130] > # Defaults to false.
	I1128 00:04:58.366937   28506 command_runner.go:130] > # device_ownership_from_security_context = false
	I1128 00:04:58.366957   28506 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1128 00:04:58.366970   28506 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1128 00:04:58.366980   28506 command_runner.go:130] > # hooks_dir = [
	I1128 00:04:58.366990   28506 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1128 00:04:58.366996   28506 command_runner.go:130] > # ]
	I1128 00:04:58.367005   28506 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1128 00:04:58.367020   28506 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1128 00:04:58.367031   28506 command_runner.go:130] > # its default mounts from the following two files:
	I1128 00:04:58.367040   28506 command_runner.go:130] > #
	I1128 00:04:58.367050   28506 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1128 00:04:58.367063   28506 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1128 00:04:58.367074   28506 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1128 00:04:58.367080   28506 command_runner.go:130] > #
	I1128 00:04:58.367090   28506 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1128 00:04:58.367104   28506 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1128 00:04:58.367117   28506 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1128 00:04:58.367129   28506 command_runner.go:130] > #      only add mounts it finds in this file.
	I1128 00:04:58.367135   28506 command_runner.go:130] > #
	I1128 00:04:58.367145   28506 command_runner.go:130] > # default_mounts_file = ""
	I1128 00:04:58.367157   28506 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1128 00:04:58.367167   28506 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1128 00:04:58.367173   28506 command_runner.go:130] > pids_limit = 1024
	I1128 00:04:58.367187   28506 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1128 00:04:58.367201   28506 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1128 00:04:58.367211   28506 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1128 00:04:58.367226   28506 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1128 00:04:58.367236   28506 command_runner.go:130] > # log_size_max = -1
	I1128 00:04:58.367246   28506 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1128 00:04:58.367261   28506 command_runner.go:130] > # log_to_journald = false
	I1128 00:04:58.367272   28506 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1128 00:04:58.367281   28506 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1128 00:04:58.367289   28506 command_runner.go:130] > # Path to directory for container attach sockets.
	I1128 00:04:58.367298   28506 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1128 00:04:58.367307   28506 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1128 00:04:58.367317   28506 command_runner.go:130] > # bind_mount_prefix = ""
	I1128 00:04:58.367327   28506 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1128 00:04:58.367338   28506 command_runner.go:130] > # read_only = false
	I1128 00:04:58.367348   28506 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1128 00:04:58.367362   28506 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1128 00:04:58.367373   28506 command_runner.go:130] > # live configuration reload.
	I1128 00:04:58.367390   28506 command_runner.go:130] > # log_level = "info"
	I1128 00:04:58.367402   28506 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1128 00:04:58.367411   28506 command_runner.go:130] > # This option supports live configuration reload.
	I1128 00:04:58.367419   28506 command_runner.go:130] > # log_filter = ""
	I1128 00:04:58.367426   28506 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1128 00:04:58.367437   28506 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1128 00:04:58.367448   28506 command_runner.go:130] > # separated by comma.
	I1128 00:04:58.367457   28506 command_runner.go:130] > # uid_mappings = ""
	I1128 00:04:58.367468   28506 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1128 00:04:58.367480   28506 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1128 00:04:58.367490   28506 command_runner.go:130] > # separated by comma.
	I1128 00:04:58.367497   28506 command_runner.go:130] > # gid_mappings = ""
	I1128 00:04:58.367507   28506 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1128 00:04:58.367516   28506 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1128 00:04:58.367533   28506 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1128 00:04:58.367543   28506 command_runner.go:130] > # minimum_mappable_uid = -1
	I1128 00:04:58.367554   28506 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1128 00:04:58.367567   28506 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1128 00:04:58.367580   28506 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1128 00:04:58.367588   28506 command_runner.go:130] > # minimum_mappable_gid = -1
	I1128 00:04:58.367595   28506 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1128 00:04:58.367607   28506 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1128 00:04:58.367621   28506 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1128 00:04:58.367631   28506 command_runner.go:130] > # ctr_stop_timeout = 30
	I1128 00:04:58.367641   28506 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1128 00:04:58.367654   28506 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1128 00:04:58.367721   28506 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1128 00:04:58.367745   28506 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1128 00:04:58.367761   28506 command_runner.go:130] > drop_infra_ctr = false
	I1128 00:04:58.367774   28506 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1128 00:04:58.367786   28506 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1128 00:04:58.367800   28506 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1128 00:04:58.367821   28506 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1128 00:04:58.367834   28506 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1128 00:04:58.367846   28506 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1128 00:04:58.367854   28506 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1128 00:04:58.367869   28506 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1128 00:04:58.367878   28506 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1128 00:04:58.367886   28506 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1128 00:04:58.367896   28506 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1128 00:04:58.367907   28506 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1128 00:04:58.367914   28506 command_runner.go:130] > # default_runtime = "runc"
	I1128 00:04:58.367923   28506 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1128 00:04:58.367935   28506 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1128 00:04:58.367949   28506 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1128 00:04:58.367958   28506 command_runner.go:130] > # creation as a file is not desired either.
	I1128 00:04:58.367970   28506 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1128 00:04:58.367978   28506 command_runner.go:130] > # the hostname is being managed dynamically.
	I1128 00:04:58.367990   28506 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1128 00:04:58.367996   28506 command_runner.go:130] > # ]
	I1128 00:04:58.368011   28506 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1128 00:04:58.368025   28506 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1128 00:04:58.368039   28506 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1128 00:04:58.368049   28506 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1128 00:04:58.368056   28506 command_runner.go:130] > #
	I1128 00:04:58.368065   28506 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1128 00:04:58.368077   28506 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1128 00:04:58.368085   28506 command_runner.go:130] > #  runtime_type = "oci"
	I1128 00:04:58.368096   28506 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1128 00:04:58.368107   28506 command_runner.go:130] > #  privileged_without_host_devices = false
	I1128 00:04:58.368118   28506 command_runner.go:130] > #  allowed_annotations = []
	I1128 00:04:58.368126   28506 command_runner.go:130] > # Where:
	I1128 00:04:58.368137   28506 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1128 00:04:58.368143   28506 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1128 00:04:58.368157   28506 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1128 00:04:58.368171   28506 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1128 00:04:58.368181   28506 command_runner.go:130] > #   in $PATH.
	I1128 00:04:58.368192   28506 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1128 00:04:58.368206   28506 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1128 00:04:58.368220   28506 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1128 00:04:58.368226   28506 command_runner.go:130] > #   state.
	I1128 00:04:58.368234   28506 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1128 00:04:58.368248   28506 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1128 00:04:58.368262   28506 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1128 00:04:58.368276   28506 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1128 00:04:58.368289   28506 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1128 00:04:58.368303   28506 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1128 00:04:58.368312   28506 command_runner.go:130] > #   The currently recognized values are:
	I1128 00:04:58.368320   28506 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1128 00:04:58.368336   28506 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1128 00:04:58.368349   28506 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1128 00:04:58.368362   28506 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1128 00:04:58.368377   28506 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1128 00:04:58.368391   28506 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1128 00:04:58.368399   28506 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1128 00:04:58.368412   28506 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1128 00:04:58.368428   28506 command_runner.go:130] > #   should be moved to the container's cgroup
	I1128 00:04:58.368439   28506 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1128 00:04:58.368446   28506 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1128 00:04:58.368456   28506 command_runner.go:130] > runtime_type = "oci"
	I1128 00:04:58.368463   28506 command_runner.go:130] > runtime_root = "/run/runc"
	I1128 00:04:58.368473   28506 command_runner.go:130] > runtime_config_path = ""
	I1128 00:04:58.368478   28506 command_runner.go:130] > monitor_path = ""
	I1128 00:04:58.368485   28506 command_runner.go:130] > monitor_cgroup = ""
	I1128 00:04:58.368502   28506 command_runner.go:130] > monitor_exec_cgroup = ""
	I1128 00:04:58.368516   28506 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1128 00:04:58.368527   28506 command_runner.go:130] > # running containers
	I1128 00:04:58.368534   28506 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1128 00:04:58.368547   28506 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1128 00:04:58.368619   28506 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1128 00:04:58.368633   28506 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1128 00:04:58.368645   28506 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1128 00:04:58.368654   28506 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1128 00:04:58.368659   28506 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1128 00:04:58.368673   28506 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1128 00:04:58.368685   28506 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1128 00:04:58.368696   28506 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1128 00:04:58.368710   28506 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1128 00:04:58.368722   28506 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1128 00:04:58.368734   28506 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1128 00:04:58.368745   28506 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1128 00:04:58.368767   28506 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1128 00:04:58.368781   28506 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1128 00:04:58.368796   28506 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1128 00:04:58.368811   28506 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1128 00:04:58.368824   28506 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1128 00:04:58.368834   28506 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1128 00:04:58.368843   28506 command_runner.go:130] > # Example:
	I1128 00:04:58.368852   28506 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1128 00:04:58.368864   28506 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1128 00:04:58.368873   28506 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1128 00:04:58.368881   28506 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1128 00:04:58.368890   28506 command_runner.go:130] > # cpuset = 0
	I1128 00:04:58.368897   28506 command_runner.go:130] > # cpushares = "0-1"
	I1128 00:04:58.368906   28506 command_runner.go:130] > # Where:
	I1128 00:04:58.368913   28506 command_runner.go:130] > # The workload name is workload-type.
	I1128 00:04:58.368924   28506 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1128 00:04:58.368934   28506 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1128 00:04:58.368947   28506 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1128 00:04:58.368963   28506 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1128 00:04:58.368975   28506 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1128 00:04:58.368984   28506 command_runner.go:130] > # 
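A pod opting into the example workload above would carry the activation annotation plus an optional per-container override. A small illustrative sketch in Go; the container name "my-container" and the cpushares value are made up:

package main

import "fmt"

func main() {
	// Hypothetical pod annotations matching the workload example above.
	annotations := map[string]string{
		"io.crio/workload":                   "",                     // activation annotation, key only
		"io.crio.workload-type/my-container": `{"cpushares": "512"}`, // per-container resource override
	}
	for k, v := range annotations {
		fmt.Printf("%s=%s\n", k, v)
	}
}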
	I1128 00:04:58.368994   28506 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1128 00:04:58.369001   28506 command_runner.go:130] > #
	I1128 00:04:58.369007   28506 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1128 00:04:58.369020   28506 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1128 00:04:58.369035   28506 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1128 00:04:58.369048   28506 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1128 00:04:58.369060   28506 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1128 00:04:58.369069   28506 command_runner.go:130] > [crio.image]
	I1128 00:04:58.369083   28506 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1128 00:04:58.369091   28506 command_runner.go:130] > # default_transport = "docker://"
	I1128 00:04:58.369101   28506 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1128 00:04:58.369116   28506 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1128 00:04:58.369126   28506 command_runner.go:130] > # global_auth_file = ""
	I1128 00:04:58.369136   28506 command_runner.go:130] > # The image used to instantiate infra containers.
	I1128 00:04:58.369147   28506 command_runner.go:130] > # This option supports live configuration reload.
	I1128 00:04:58.369156   28506 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1128 00:04:58.369169   28506 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1128 00:04:58.369177   28506 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1128 00:04:58.369186   28506 command_runner.go:130] > # This option supports live configuration reload.
	I1128 00:04:58.369197   28506 command_runner.go:130] > # pause_image_auth_file = ""
	I1128 00:04:58.369210   28506 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1128 00:04:58.369224   28506 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1128 00:04:58.369234   28506 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1128 00:04:58.369246   28506 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1128 00:04:58.369256   28506 command_runner.go:130] > # pause_command = "/pause"
	I1128 00:04:58.369262   28506 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1128 00:04:58.369279   28506 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1128 00:04:58.369293   28506 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1128 00:04:58.369304   28506 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1128 00:04:58.369312   28506 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1128 00:04:58.369319   28506 command_runner.go:130] > # signature_policy = ""
	I1128 00:04:58.369329   28506 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1128 00:04:58.369339   28506 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1128 00:04:58.369343   28506 command_runner.go:130] > # changing them here.
	I1128 00:04:58.369347   28506 command_runner.go:130] > # insecure_registries = [
	I1128 00:04:58.369350   28506 command_runner.go:130] > # ]
	I1128 00:04:58.369366   28506 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1128 00:04:58.369375   28506 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1128 00:04:58.369382   28506 command_runner.go:130] > # image_volumes = "mkdir"
	I1128 00:04:58.369390   28506 command_runner.go:130] > # Temporary directory to use for storing big files
	I1128 00:04:58.369408   28506 command_runner.go:130] > # big_files_temporary_dir = ""
	I1128 00:04:58.369419   28506 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1128 00:04:58.369424   28506 command_runner.go:130] > # CNI plugins.
	I1128 00:04:58.369428   28506 command_runner.go:130] > [crio.network]
	I1128 00:04:58.369438   28506 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1128 00:04:58.369447   28506 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1128 00:04:58.369454   28506 command_runner.go:130] > # cni_default_network = ""
	I1128 00:04:58.369464   28506 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1128 00:04:58.369472   28506 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1128 00:04:58.369481   28506 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1128 00:04:58.369488   28506 command_runner.go:130] > # plugin_dirs = [
	I1128 00:04:58.369495   28506 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1128 00:04:58.369500   28506 command_runner.go:130] > # ]
	I1128 00:04:58.369510   28506 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1128 00:04:58.369518   28506 command_runner.go:130] > [crio.metrics]
	I1128 00:04:58.369523   28506 command_runner.go:130] > # Globally enable or disable metrics support.
	I1128 00:04:58.369527   28506 command_runner.go:130] > enable_metrics = true
	I1128 00:04:58.369532   28506 command_runner.go:130] > # Specify enabled metrics collectors.
	I1128 00:04:58.369539   28506 command_runner.go:130] > # Per default all metrics are enabled.
	I1128 00:04:58.369550   28506 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1128 00:04:58.369564   28506 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1128 00:04:58.369581   28506 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1128 00:04:58.369593   28506 command_runner.go:130] > # metrics_collectors = [
	I1128 00:04:58.369603   28506 command_runner.go:130] > # 	"operations",
	I1128 00:04:58.369612   28506 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1128 00:04:58.369620   28506 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1128 00:04:58.369624   28506 command_runner.go:130] > # 	"operations_errors",
	I1128 00:04:58.369629   28506 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1128 00:04:58.369634   28506 command_runner.go:130] > # 	"image_pulls_by_name",
	I1128 00:04:58.369639   28506 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1128 00:04:58.369643   28506 command_runner.go:130] > # 	"image_pulls_failures",
	I1128 00:04:58.369649   28506 command_runner.go:130] > # 	"image_pulls_successes",
	I1128 00:04:58.369654   28506 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1128 00:04:58.369660   28506 command_runner.go:130] > # 	"image_layer_reuse",
	I1128 00:04:58.369664   28506 command_runner.go:130] > # 	"containers_oom_total",
	I1128 00:04:58.369669   28506 command_runner.go:130] > # 	"containers_oom",
	I1128 00:04:58.369675   28506 command_runner.go:130] > # 	"processes_defunct",
	I1128 00:04:58.369679   28506 command_runner.go:130] > # 	"operations_total",
	I1128 00:04:58.369684   28506 command_runner.go:130] > # 	"operations_latency_seconds",
	I1128 00:04:58.369693   28506 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1128 00:04:58.369709   28506 command_runner.go:130] > # 	"operations_errors_total",
	I1128 00:04:58.369719   28506 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1128 00:04:58.369730   28506 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1128 00:04:58.369739   28506 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1128 00:04:58.369749   28506 command_runner.go:130] > # 	"image_pulls_success_total",
	I1128 00:04:58.369760   28506 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1128 00:04:58.369766   28506 command_runner.go:130] > # 	"containers_oom_count_total",
	I1128 00:04:58.369773   28506 command_runner.go:130] > # ]
	I1128 00:04:58.369778   28506 command_runner.go:130] > # The port on which the metrics server will listen.
	I1128 00:04:58.369782   28506 command_runner.go:130] > # metrics_port = 9090
	I1128 00:04:58.369787   28506 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1128 00:04:58.369791   28506 command_runner.go:130] > # metrics_socket = ""
	I1128 00:04:58.369797   28506 command_runner.go:130] > # The certificate for the secure metrics server.
	I1128 00:04:58.369805   28506 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1128 00:04:58.369811   28506 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1128 00:04:58.369819   28506 command_runner.go:130] > # certificate on any modification event.
	I1128 00:04:58.369823   28506 command_runner.go:130] > # metrics_cert = ""
	I1128 00:04:58.369829   28506 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1128 00:04:58.369836   28506 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1128 00:04:58.369842   28506 command_runner.go:130] > # metrics_key = ""
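With enable_metrics = true as set above, CRI-O serves Prometheus metrics over HTTP; the commented default port is 9090. A minimal Go sketch that scrapes the endpoint, assuming the default port is in effect and the query runs on the node itself:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Fetch CRI-O's Prometheus metrics from the assumed default port.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s", body)
}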
	I1128 00:04:58.369848   28506 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1128 00:04:58.369854   28506 command_runner.go:130] > [crio.tracing]
	I1128 00:04:58.369860   28506 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1128 00:04:58.369864   28506 command_runner.go:130] > # enable_tracing = false
	I1128 00:04:58.369871   28506 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1128 00:04:58.369876   28506 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1128 00:04:58.369883   28506 command_runner.go:130] > # Number of samples to collect per million spans.
	I1128 00:04:58.369887   28506 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1128 00:04:58.369896   28506 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1128 00:04:58.369900   28506 command_runner.go:130] > [crio.stats]
	I1128 00:04:58.369908   28506 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1128 00:04:58.369913   28506 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1128 00:04:58.369923   28506 command_runner.go:130] > # stats_collection_period = 0
	I1128 00:04:58.370010   28506 cni.go:84] Creating CNI manager for ""
	I1128 00:04:58.370020   28506 cni.go:136] 3 nodes found, recommending kindnet
	I1128 00:04:58.370037   28506 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:04:58.370062   28506 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.159 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-883509 NodeName:multinode-883509 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:04:58.370175   28506 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-883509"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
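The generated config hard-codes podSubnet 10.244.0.0/16, serviceSubnet 10.96.0.0/12 and the node IP 192.168.39.159; these ranges must not overlap for routing to work. A quick sanity check in Go, a standalone sketch rather than anything minikube runs:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	podCIDR := netip.MustParsePrefix("10.244.0.0/16") // podSubnet from the generated config
	svcCIDR := netip.MustParsePrefix("10.96.0.0/12")  // serviceSubnet
	nodeIP := netip.MustParseAddr("192.168.39.159")   // advertiseAddress / node-ip

	fmt.Println("pod and service ranges overlap:", podCIDR.Overlaps(svcCIDR))
	fmt.Println("node IP inside pod range:", podCIDR.Contains(nodeIP))
}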
	
	I1128 00:04:58.370240   28506 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-883509 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-883509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 00:04:58.370292   28506 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 00:04:58.381293   28506 command_runner.go:130] > kubeadm
	I1128 00:04:58.381310   28506 command_runner.go:130] > kubectl
	I1128 00:04:58.381316   28506 command_runner.go:130] > kubelet
	I1128 00:04:58.381542   28506 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:04:58.381603   28506 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:04:58.391861   28506 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1128 00:04:58.408822   28506 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:04:58.427130   28506 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1128 00:04:58.444905   28506 ssh_runner.go:195] Run: grep 192.168.39.159	control-plane.minikube.internal$ /etc/hosts
	I1128 00:04:58.448832   28506 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
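The bash one-liner above rewrites /etc/hosts so that exactly one line maps control-plane.minikube.internal to the node IP. The same logic as a small Go sketch (path, IP and host name taken from the log; the helper name is made up):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<name>" and appends
// a fresh "<ip>\t<name>" line, mirroring the bash one-liner above.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry, re-added below
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.159", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}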
	I1128 00:04:58.459823   28506 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509 for IP: 192.168.39.159
	I1128 00:04:58.459848   28506 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:04:58.459993   28506 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:04:58.460062   28506 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:04:58.460151   28506 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.key
	I1128 00:04:58.460356   28506 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/apiserver.key.b15c5797
	I1128 00:04:58.460431   28506 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/proxy-client.key
	I1128 00:04:58.460444   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1128 00:04:58.460461   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1128 00:04:58.460479   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1128 00:04:58.460499   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1128 00:04:58.460515   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1128 00:04:58.460533   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1128 00:04:58.460549   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1128 00:04:58.460564   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1128 00:04:58.460638   28506 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:04:58.460676   28506 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:04:58.460696   28506 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:04:58.460727   28506 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:04:58.460774   28506 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:04:58.460811   28506 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:04:58.460861   28506 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:04:58.460898   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem -> /usr/share/ca-certificates/11930.pem
	I1128 00:04:58.460917   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> /usr/share/ca-certificates/119302.pem
	I1128 00:04:58.460935   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:04:58.461490   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:04:58.488727   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 00:04:58.512899   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:04:58.536133   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1128 00:04:58.559828   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:04:58.581155   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:04:58.602998   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:04:58.624705   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:04:58.646040   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:04:58.666964   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:04:58.688115   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:04:58.711352   28506 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:04:58.728319   28506 ssh_runner.go:195] Run: openssl version
	I1128 00:04:58.734234   28506 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1128 00:04:58.734315   28506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:04:58.745478   28506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:04:58.750908   28506 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:04:58.751005   28506 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:04:58.751075   28506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:04:58.756881   28506 command_runner.go:130] > b5213941
	I1128 00:04:58.757038   28506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:04:58.768781   28506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:04:58.780057   28506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:04:58.784766   28506 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:04:58.784947   28506 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:04:58.784999   28506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:04:58.790406   28506 command_runner.go:130] > 51391683
	I1128 00:04:58.790756   28506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:04:58.801512   28506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:04:58.812552   28506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:04:58.817161   28506 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:04:58.817185   28506 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:04:58.817229   28506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:04:58.822439   28506 command_runner.go:130] > 3ec20f2e
	I1128 00:04:58.822755   28506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:04:58.832766   28506 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:04:58.836959   28506 command_runner.go:130] > ca.crt
	I1128 00:04:58.836977   28506 command_runner.go:130] > ca.key
	I1128 00:04:58.836984   28506 command_runner.go:130] > healthcheck-client.crt
	I1128 00:04:58.836991   28506 command_runner.go:130] > healthcheck-client.key
	I1128 00:04:58.837005   28506 command_runner.go:130] > peer.crt
	I1128 00:04:58.837010   28506 command_runner.go:130] > peer.key
	I1128 00:04:58.837016   28506 command_runner.go:130] > server.crt
	I1128 00:04:58.837021   28506 command_runner.go:130] > server.key
	I1128 00:04:58.837078   28506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:04:58.842960   28506 command_runner.go:130] > Certificate will not expire
	I1128 00:04:58.843219   28506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:04:58.849301   28506 command_runner.go:130] > Certificate will not expire
	I1128 00:04:58.849734   28506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:04:58.855395   28506 command_runner.go:130] > Certificate will not expire
	I1128 00:04:58.855617   28506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:04:58.861006   28506 command_runner.go:130] > Certificate will not expire
	I1128 00:04:58.861208   28506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:04:58.866885   28506 command_runner.go:130] > Certificate will not expire
	I1128 00:04:58.867009   28506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1128 00:04:58.873065   28506 command_runner.go:130] > Certificate will not expire
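Each of the openssl invocations above asks whether a certificate expires within the next 86400 seconds (24 hours). The equivalent check in Go, as a sketch using only the standard library:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within d, i.e. roughly `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}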
	I1128 00:04:58.873383   28506 kubeadm.go:404] StartCluster: {Name:multinode-883509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-883509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.128 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:04:58.873531   28506 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:04:58.873588   28506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:04:58.915363   28506 cri.go:89] found id: ""
	I1128 00:04:58.915427   28506 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:04:58.927216   28506 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1128 00:04:58.927243   28506 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1128 00:04:58.927252   28506 command_runner.go:130] > /var/lib/minikube/etcd:
	I1128 00:04:58.927257   28506 command_runner.go:130] > member
	I1128 00:04:58.927271   28506 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:04:58.927277   28506 kubeadm.go:636] restartCluster start
	I1128 00:04:58.927321   28506 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:04:58.937302   28506 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:04:58.938140   28506 kubeconfig.go:92] found "multinode-883509" server: "https://192.168.39.159:8443"
	I1128 00:04:58.938852   28506 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:04:58.939169   28506 kapi.go:59] client config for multinode-883509: &rest.Config{Host:"https://192.168.39.159:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 00:04:58.939948   28506 cert_rotation.go:137] Starting client certificate rotation controller
	I1128 00:04:58.940176   28506 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:04:58.950258   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:04:58.950311   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:04:58.962829   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:04:58.962850   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:04:58.962898   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:04:58.974475   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:04:59.475204   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:04:59.475313   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:04:59.487943   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:04:59.975503   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:04:59.975585   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:04:59.988901   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:05:00.475579   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:05:00.482812   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:05:00.495078   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:05:00.974681   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:05:00.974783   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:05:00.989538   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:05:01.475111   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:05:01.475202   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:05:01.487707   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:05:01.975408   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:05:01.975582   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:05:01.988500   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:05:02.475062   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:05:02.475136   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:05:02.487741   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:05:02.975395   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:05:02.975468   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:05:02.987588   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:05:03.474602   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:05:03.474702   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:05:03.488686   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:05:03.975350   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:05:03.975445   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:05:03.987538   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:05:04.475191   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:05:04.475271   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:05:04.487701   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:05:04.975317   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:05:04.975416   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:05:04.988692   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:05:05.475533   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:05:05.475612   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:05:05.489506   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:05:05.975101   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:05:05.975174   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:05:05.987850   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:05:06.475519   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:05:06.475612   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:05:06.489749   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:05:06.975454   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:05:06.975540   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:05:06.987347   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:05:07.474844   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:05:07.474926   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:05:07.488345   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:05:07.974801   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:05:07.974875   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:05:07.989419   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:05:08.475527   28506 api_server.go:166] Checking apiserver status ...
	I1128 00:05:08.475594   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:05:08.488788   28506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:05:08.950533   28506 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
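The repeated "Checking apiserver status" / "unable to get apiserver pid" lines above are a poll loop: the same `sudo pgrep -xnf kube-apiserver.*minikube.*` probe is retried roughly every 500ms until a deadline is hit, at which point the tool concludes the cluster needs a reconfigure. A minimal sketch of that retry pattern, assuming a 10-second context deadline; the function name waitForAPIServerPID is hypothetical and only the pgrep invocation is taken from the log.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep for a kube-apiserver process until it
// appears or the context deadline expires.
func waitForAPIServerPID(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("apiserver error: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if _, err := waitForAPIServerPID(ctx); err != nil {
		// Matches the outcome logged above: "context deadline exceeded".
		fmt.Println("needs reconfigure:", err)
	}
}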
	I1128 00:05:08.950569   28506 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:05:08.950582   28506 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:05:08.950633   28506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:05:08.991618   28506 cri.go:89] found id: ""
	I1128 00:05:08.991694   28506 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:05:09.008246   28506 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:05:09.018030   28506 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1128 00:05:09.018062   28506 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1128 00:05:09.018074   28506 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1128 00:05:09.018110   28506 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:05:09.018227   28506 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:05:09.018283   28506 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:05:09.028392   28506 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:05:09.028413   28506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:05:09.128912   28506 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 00:05:09.129321   28506 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1128 00:05:09.129874   28506 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1128 00:05:09.130351   28506 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 00:05:09.131189   28506 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1128 00:05:09.131494   28506 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1128 00:05:09.132346   28506 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1128 00:05:09.132898   28506 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1128 00:05:09.133337   28506 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1128 00:05:09.133813   28506 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 00:05:09.134283   28506 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 00:05:09.135091   28506 command_runner.go:130] > [certs] Using the existing "sa" key
	I1128 00:05:09.136354   28506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:05:10.098027   28506 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 00:05:10.098054   28506 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 00:05:10.098064   28506 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 00:05:10.098073   28506 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 00:05:10.098082   28506 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 00:05:10.098146   28506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:05:10.294324   28506 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 00:05:10.294350   28506 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 00:05:10.294355   28506 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1128 00:05:10.294375   28506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:05:10.364908   28506 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 00:05:10.364938   28506 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 00:05:10.367068   28506 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 00:05:10.368011   28506 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 00:05:10.370207   28506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:05:10.438281   28506 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
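As the log shows, the restart path does not run a full `kubeadm init`; it replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. Below is a rough sketch of that sequence driven from Go. It is illustrative only: the binary and config paths are copied from the log, while the `env PATH=...` wrapper used in the real commands is omitted for brevity.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"/var/lib/minikube/binaries/v1.28.4/kubeadm", "init", "phase"}, phase...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("sudo", args...).CombinedOutput()
		fmt.Printf("%s\n", out)
		if err != nil {
			fmt.Println("phase failed:", err)
			return
		}
	}
}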
	I1128 00:05:10.438329   28506 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:05:10.438392   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:05:10.450641   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:05:10.967570   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:05:11.467097   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:05:11.967969   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:05:12.467180   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:05:12.967979   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:05:12.990474   28506 command_runner.go:130] > 1107
	I1128 00:05:12.992473   28506 api_server.go:72] duration metric: took 2.554140973s to wait for apiserver process to appear ...
	I1128 00:05:12.992514   28506 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:05:12.992540   28506 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1128 00:05:16.316745   28506 api_server.go:279] https://192.168.39.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:05:16.316789   28506 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:05:16.316801   28506 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1128 00:05:16.384528   28506 api_server.go:279] https://192.168.39.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:05:16.384562   28506 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:05:16.885223   28506 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1128 00:05:16.898995   28506 api_server.go:279] https://192.168.39.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:05:16.899033   28506 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:05:17.385698   28506 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1128 00:05:17.391186   28506 api_server.go:279] https://192.168.39.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:05:17.391212   28506 api_server.go:103] status: https://192.168.39.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:05:17.884808   28506 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1128 00:05:17.890146   28506 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
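The healthz polling above progresses from 403 (the probe is unauthenticated, so `system:anonymous` is rejected), through 500 while post-start hooks such as rbac/bootstrap-roles are still failing, to a final 200 with body "ok". A self-contained sketch of such a probe is below; note that it skips TLS verification and sends no client certificate purely to keep the example short, which is why it may see the same 403 as the log, whereas a real client should trust the cluster CA from the kubeconfig.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz fetches the apiserver /healthz endpoint and reports non-200
// responses, including the component detail body seen in the log.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	return nil
}

func main() {
	// Endpoint taken from the log above.
	if err := checkHealthz("https://192.168.39.159:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}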
	I1128 00:05:17.890240   28506 round_trippers.go:463] GET https://192.168.39.159:8443/version
	I1128 00:05:17.890254   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:17.890266   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:17.890278   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:17.898770   28506 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1128 00:05:17.898794   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:17.898801   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:17.898807   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:17.898812   28506 round_trippers.go:580]     Content-Length: 264
	I1128 00:05:17.898817   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:17 GMT
	I1128 00:05:17.898822   28506 round_trippers.go:580]     Audit-Id: 9b6bf794-edcc-46cc-8e59-ce7786262cde
	I1128 00:05:17.898827   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:17.898832   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:17.898850   28506 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1128 00:05:17.898914   28506 api_server.go:141] control plane version: v1.28.4
	I1128 00:05:17.898930   28506 api_server.go:131] duration metric: took 4.906406519s to wait for apiserver health ...
	I1128 00:05:17.898943   28506 cni.go:84] Creating CNI manager for ""
	I1128 00:05:17.898950   28506 cni.go:136] 3 nodes found, recommending kindnet
	I1128 00:05:17.900684   28506 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1128 00:05:17.902093   28506 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1128 00:05:17.907129   28506 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1128 00:05:17.907162   28506 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1128 00:05:17.907172   28506 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1128 00:05:17.907199   28506 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 00:05:17.907209   28506 command_runner.go:130] > Access: 2023-11-28 00:04:46.123158983 +0000
	I1128 00:05:17.907215   28506 command_runner.go:130] > Modify: 2023-11-27 22:54:55.000000000 +0000
	I1128 00:05:17.907222   28506 command_runner.go:130] > Change: 2023-11-28 00:04:44.129158983 +0000
	I1128 00:05:17.907226   28506 command_runner.go:130] >  Birth: -
	I1128 00:05:17.907787   28506 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1128 00:05:17.907807   28506 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1128 00:05:17.928436   28506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1128 00:05:19.056001   28506 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1128 00:05:19.056029   28506 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1128 00:05:19.056039   28506 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1128 00:05:19.056046   28506 command_runner.go:130] > daemonset.apps/kindnet configured
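With three nodes found, the log shows kindnet being recommended as the CNI, the manifest being copied to /var/tmp/minikube/cni.yaml, and the version-pinned kubectl applying it. A hedged sketch of that apply step from Go follows; the paths mirror the log, but the wrapper function applyCNIManifest is hypothetical and not minikube's code.

package main

import (
	"fmt"
	"os/exec"
)

func applyCNIManifest() error {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	// Expected output resembles the log, e.g. "daemonset.apps/kindnet configured".
	fmt.Printf("%s", out)
	return nil
}

func main() {
	if err := applyCNIManifest(); err != nil {
		fmt.Println(err)
	}
}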
	I1128 00:05:19.056071   28506 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.12760216s)
	I1128 00:05:19.056097   28506 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:05:19.056200   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I1128 00:05:19.056209   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:19.056217   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:19.056223   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:19.063760   28506 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1128 00:05:19.063789   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:19.063799   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:19 GMT
	I1128 00:05:19.063807   28506 round_trippers.go:580]     Audit-Id: d731aef8-67a9-4b3a-b456-d03bb2272b9b
	I1128 00:05:19.063818   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:19.063830   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:19.063841   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:19.063850   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:19.065533   28506 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"851"},"items":[{"metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"782","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82638 chars]
	I1128 00:05:19.069633   28506 system_pods.go:59] 12 kube-system pods found
	I1128 00:05:19.069671   28506 system_pods.go:61] "coredns-5dd5756b68-9vws5" [66ac3c18-9997-49aa-a154-ade69c138f12] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:05:19.069682   28506 system_pods.go:61] "etcd-multinode-883509" [58bb8943-0a7c-4d4c-a090-ea8de587f504] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 00:05:19.069692   28506 system_pods.go:61] "kindnet-t4wlq" [ab1a3a4e-2d8d-49cd-bbe5-1e52fa0b4350] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1128 00:05:19.069699   28506 system_pods.go:61] "kindnet-xtnn9" [f708ed6f-b1dd-4fb5-9e07-15fcd79c82c5] Running
	I1128 00:05:19.069711   28506 system_pods.go:61] "kindnet-ztt77" [acbfe061-9a56-4999-baed-ef8d73dc9222] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1128 00:05:19.069718   28506 system_pods.go:61] "kube-apiserver-multinode-883509" [0a144c07-5db8-418a-ad15-110fabc7f377] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 00:05:19.069724   28506 system_pods.go:61] "kube-controller-manager-multinode-883509" [f8474e48-c333-4772-ae1f-59cdb2bf95eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 00:05:19.069729   28506 system_pods.go:61] "kube-proxy-6dvv4" [c6651c7d-33a2-4a46-9d73-e60ee19557fa] Running
	I1128 00:05:19.069733   28506 system_pods.go:61] "kube-proxy-7g246" [c03a2053-f013-4269-a5e1-0acfebfc606c] Running
	I1128 00:05:19.069738   28506 system_pods.go:61] "kube-proxy-fvsj6" [d0e7a02e-868c-4774-885c-8b5ad728f451] Running
	I1128 00:05:19.069746   28506 system_pods.go:61] "kube-scheduler-multinode-883509" [191f6a8c-7604-4f03-ba5a-d717b27f634b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 00:05:19.069750   28506 system_pods.go:61] "storage-provisioner" [e59cdfcb-f7c6-4be9-a2e1-0931d582343c] Running
	I1128 00:05:19.069756   28506 system_pods.go:74] duration metric: took 13.653421ms to wait for pod list to return data ...
	I1128 00:05:19.069767   28506 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:05:19.069830   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes
	I1128 00:05:19.069842   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:19.069853   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:19.069866   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:19.077253   28506 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1128 00:05:19.077272   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:19.077279   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:19.077284   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:19.077292   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:19 GMT
	I1128 00:05:19.077297   28506 round_trippers.go:580]     Audit-Id: 4c8ab93d-25d8-4d94-a780-c5956c29e299
	I1128 00:05:19.077302   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:19.077307   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:19.078482   28506 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"853"},"items":[{"metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"751","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15371 chars]
	I1128 00:05:19.079680   28506 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:05:19.079712   28506 node_conditions.go:123] node cpu capacity is 2
	I1128 00:05:19.079726   28506 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:05:19.079733   28506 node_conditions.go:123] node cpu capacity is 2
	I1128 00:05:19.079738   28506 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:05:19.079744   28506 node_conditions.go:123] node cpu capacity is 2
	I1128 00:05:19.079749   28506 node_conditions.go:105] duration metric: took 9.976954ms to run NodePressure ...
	I1128 00:05:19.079816   28506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:05:19.313310   28506 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1128 00:05:19.313338   28506 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1128 00:05:19.313365   28506 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:05:19.313467   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1128 00:05:19.313478   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:19.313489   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:19.313501   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:19.316368   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:19.316386   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:19.316393   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:19.316398   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:19.316404   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:19.316409   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:19 GMT
	I1128 00:05:19.316413   28506 round_trippers.go:580]     Audit-Id: 2e65ff79-e99e-4e46-b99f-f257702bbf2f
	I1128 00:05:19.316460   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:19.317207   28506 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"863"},"items":[{"metadata":{"name":"etcd-multinode-883509","namespace":"kube-system","uid":"58bb8943-0a7c-4d4c-a090-ea8de587f504","resourceVersion":"781","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.159:2379","kubernetes.io/config.hash":"8d23c211c8738dad6e022e03cd2c9ea7","kubernetes.io/config.mirror":"8d23c211c8738dad6e022e03cd2c9ea7","kubernetes.io/config.seen":"2023-11-27T23:54:53.116542435Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I1128 00:05:19.318196   28506 kubeadm.go:787] kubelet initialised
	I1128 00:05:19.318214   28506 kubeadm.go:788] duration metric: took 4.837354ms waiting for restarted kubelet to initialise ...
	I1128 00:05:19.318221   28506 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
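The GET requests that follow implement this wait: each system-critical pod is fetched and its hosting node checked for the Ready condition before the pod itself is considered. A compact client-go sketch of the same idea is shown below, under stated assumptions: the kubeconfig path is the one from the log, the 2-second poll interval is an arbitrary choice, and the loop checks only pod readiness rather than the full per-component label list used above.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's Ready condition is True.
func podReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17206-4749/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err == nil {
			allReady := true
			for _, p := range pods.Items {
				if !podReady(p) {
					allReady = false
					break
				}
			}
			if allReady {
				fmt.Println("all kube-system pods are Ready")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for kube-system pods")
}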
	I1128 00:05:19.318287   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I1128 00:05:19.318296   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:19.318303   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:19.318310   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:19.321525   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:19.321549   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:19.321559   28506 round_trippers.go:580]     Audit-Id: fc666c2c-17ad-47ce-aae4-3409fee61608
	I1128 00:05:19.321567   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:19.321575   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:19.321584   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:19.321592   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:19.321599   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:19 GMT
	I1128 00:05:19.323145   28506 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"863"},"items":[{"metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"782","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82638 chars]
	I1128 00:05:19.326663   28506 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:19.326744   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1128 00:05:19.326759   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:19.326770   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:19.326783   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:19.330146   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:19.330167   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:19.330176   28506 round_trippers.go:580]     Audit-Id: 46060cc0-0011-41ea-a9ad-8e3f9d3c4c36
	I1128 00:05:19.330185   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:19.330194   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:19.330206   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:19.330215   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:19.330225   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:19 GMT
	I1128 00:05:19.330370   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"782","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1128 00:05:19.330738   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:19.330750   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:19.330757   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:19.330763   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:19.334178   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:19.334197   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:19.334203   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:19.334210   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:19.334218   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:19.334227   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:19.334234   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:19 GMT
	I1128 00:05:19.334246   28506 round_trippers.go:580]     Audit-Id: 23bf65d6-6577-4ff7-b217-1b067e158172
	I1128 00:05:19.334425   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"751","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1128 00:05:19.334690   28506 pod_ready.go:97] node "multinode-883509" hosting pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-883509" has status "Ready":"False"
	I1128 00:05:19.334705   28506 pod_ready.go:81] duration metric: took 8.01915ms waiting for pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace to be "Ready" ...
	E1128 00:05:19.334713   28506 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-883509" hosting pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-883509" has status "Ready":"False"
	I1128 00:05:19.334730   28506 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:19.334791   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-883509
	I1128 00:05:19.334800   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:19.334806   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:19.334812   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:19.337457   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:19.337476   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:19.337485   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:19 GMT
	I1128 00:05:19.337491   28506 round_trippers.go:580]     Audit-Id: bac71344-6e5d-4ab3-a055-792d6de51f3d
	I1128 00:05:19.337499   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:19.337504   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:19.337509   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:19.337515   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:19.337988   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-883509","namespace":"kube-system","uid":"58bb8943-0a7c-4d4c-a090-ea8de587f504","resourceVersion":"781","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.159:2379","kubernetes.io/config.hash":"8d23c211c8738dad6e022e03cd2c9ea7","kubernetes.io/config.mirror":"8d23c211c8738dad6e022e03cd2c9ea7","kubernetes.io/config.seen":"2023-11-27T23:54:53.116542435Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1128 00:05:19.338286   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:19.338298   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:19.338305   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:19.338311   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:19.341285   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:19.341304   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:19.341312   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:19.341324   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:19.341337   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:19.341344   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:19 GMT
	I1128 00:05:19.341361   28506 round_trippers.go:580]     Audit-Id: 4300354e-5579-4f4a-92e7-98bf4b369468
	I1128 00:05:19.341369   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:19.341525   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"751","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1128 00:05:19.341918   28506 pod_ready.go:97] node "multinode-883509" hosting pod "etcd-multinode-883509" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-883509" has status "Ready":"False"
	I1128 00:05:19.341949   28506 pod_ready.go:81] duration metric: took 7.205324ms waiting for pod "etcd-multinode-883509" in "kube-system" namespace to be "Ready" ...
	E1128 00:05:19.341968   28506 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-883509" hosting pod "etcd-multinode-883509" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-883509" has status "Ready":"False"
	I1128 00:05:19.341984   28506 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:19.342043   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-883509
	I1128 00:05:19.342053   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:19.342063   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:19.342072   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:19.349031   28506 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1128 00:05:19.349063   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:19.349073   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:19.349082   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:19.349091   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:19.349106   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:19.349112   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:19 GMT
	I1128 00:05:19.349117   28506 round_trippers.go:580]     Audit-Id: 4b582455-e66d-4a11-b2d8-bd6cfd33f16c
	I1128 00:05:19.349650   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-883509","namespace":"kube-system","uid":"0a144c07-5db8-418a-ad15-110fabc7f377","resourceVersion":"770","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.159:8443","kubernetes.io/config.hash":"3b5e7b5fdb84862f46e6248e54c84795","kubernetes.io/config.mirror":"3b5e7b5fdb84862f46e6248e54c84795","kubernetes.io/config.seen":"2023-11-27T23:54:53.116543447Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1128 00:05:19.350097   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:19.350113   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:19.350122   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:19.350132   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:19.353651   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:19.353673   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:19.353683   28506 round_trippers.go:580]     Audit-Id: 0c737fbd-a8f6-4f0c-bc9a-bfa1faab8b0f
	I1128 00:05:19.353691   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:19.353705   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:19.353714   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:19.353729   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:19.353742   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:19 GMT
	I1128 00:05:19.353889   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"751","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1128 00:05:19.354270   28506 pod_ready.go:97] node "multinode-883509" hosting pod "kube-apiserver-multinode-883509" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-883509" has status "Ready":"False"
	I1128 00:05:19.354292   28506 pod_ready.go:81] duration metric: took 12.298012ms waiting for pod "kube-apiserver-multinode-883509" in "kube-system" namespace to be "Ready" ...
	E1128 00:05:19.354301   28506 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-883509" hosting pod "kube-apiserver-multinode-883509" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-883509" has status "Ready":"False"
	I1128 00:05:19.354311   28506 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:19.354351   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-883509
	I1128 00:05:19.354360   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:19.354367   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:19.354384   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:19.356152   28506 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 00:05:19.356166   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:19.356173   28506 round_trippers.go:580]     Audit-Id: fe79e73a-94e1-443f-9e19-1176b51094a3
	I1128 00:05:19.356178   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:19.356187   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:19.356195   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:19.356209   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:19.356225   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:19 GMT
	I1128 00:05:19.356370   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-883509","namespace":"kube-system","uid":"f8474e48-c333-4772-ae1f-59cdb2bf95eb","resourceVersion":"773","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"de58e44a016d081ac103af6880ca64f0","kubernetes.io/config.mirror":"de58e44a016d081ac103af6880ca64f0","kubernetes.io/config.seen":"2023-11-27T23:54:53.116544230Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I1128 00:05:19.457085   28506 request.go:629] Waited for 100.25649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:19.457166   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:19.457171   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:19.457186   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:19.457201   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:19.460197   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:19.460219   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:19.460226   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:19 GMT
	I1128 00:05:19.460233   28506 round_trippers.go:580]     Audit-Id: 4d708e06-6f48-4e9b-8fe9-dae2b386e770
	I1128 00:05:19.460242   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:19.460255   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:19.460264   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:19.460277   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:19.460479   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"751","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1128 00:05:19.460818   28506 pod_ready.go:97] node "multinode-883509" hosting pod "kube-controller-manager-multinode-883509" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-883509" has status "Ready":"False"
	I1128 00:05:19.460837   28506 pod_ready.go:81] duration metric: took 106.51965ms waiting for pod "kube-controller-manager-multinode-883509" in "kube-system" namespace to be "Ready" ...
	E1128 00:05:19.460849   28506 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-883509" hosting pod "kube-controller-manager-multinode-883509" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-883509" has status "Ready":"False"
	I1128 00:05:19.460857   28506 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6dvv4" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:19.657262   28506 request.go:629] Waited for 196.345815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6dvv4
	I1128 00:05:19.657355   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6dvv4
	I1128 00:05:19.657369   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:19.657380   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:19.657394   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:19.660381   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:19.660402   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:19.660409   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:19.660420   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:19.660428   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:19.660438   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:19.660445   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:19 GMT
	I1128 00:05:19.660453   28506 round_trippers.go:580]     Audit-Id: f55a476d-ba2b-42ec-95fb-642629d3f04b
	I1128 00:05:19.660672   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6dvv4","generateName":"kube-proxy-","namespace":"kube-system","uid":"c6651c7d-33a2-4a46-9d73-e60ee19557fa","resourceVersion":"726","creationTimestamp":"2023-11-27T23:56:37Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"dea68644-28a8-4da5-b7c7-c0035d2ae817","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:56:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dea68644-28a8-4da5-b7c7-c0035d2ae817\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1128 00:05:19.856410   28506 request.go:629] Waited for 195.296956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m03
	I1128 00:05:19.856486   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m03
	I1128 00:05:19.856491   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:19.856499   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:19.856505   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:19.859998   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:19.860022   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:19.860029   28506 round_trippers.go:580]     Audit-Id: 936f7cce-7925-4021-85f3-cfc29b4be6d5
	I1128 00:05:19.860035   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:19.860043   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:19.860048   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:19.860059   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:19.860064   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:19 GMT
	I1128 00:05:19.860258   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m03","uid":"2bc47ce6-2761-4c93-b9f7-cf65c531732f","resourceVersion":"803","creationTimestamp":"2023-11-27T23:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:57:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3533 chars]
	I1128 00:05:19.860685   28506 pod_ready.go:92] pod "kube-proxy-6dvv4" in "kube-system" namespace has status "Ready":"True"
	I1128 00:05:19.860709   28506 pod_ready.go:81] duration metric: took 399.842337ms waiting for pod "kube-proxy-6dvv4" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:19.860721   28506 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7g246" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:20.057178   28506 request.go:629] Waited for 196.367807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7g246
	I1128 00:05:20.057268   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7g246
	I1128 00:05:20.057290   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:20.057301   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:20.057314   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:20.060305   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:20.060330   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:20.060339   28506 round_trippers.go:580]     Audit-Id: 5663d322-cfab-4818-8fbb-e80e7b211e3c
	I1128 00:05:20.060346   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:20.060354   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:20.060362   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:20.060369   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:20.060384   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:20 GMT
	I1128 00:05:20.060827   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7g246","generateName":"kube-proxy-","namespace":"kube-system","uid":"c03a2053-f013-4269-a5e1-0acfebfc606c","resourceVersion":"810","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"dea68644-28a8-4da5-b7c7-c0035d2ae817","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dea68644-28a8-4da5-b7c7-c0035d2ae817\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1128 00:05:20.256223   28506 request.go:629] Waited for 195.000036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:20.256285   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:20.256290   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:20.256298   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:20.256305   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:20.258719   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:20.258748   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:20.258758   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:20.258767   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:20.258774   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:20.258782   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:20 GMT
	I1128 00:05:20.258791   28506 round_trippers.go:580]     Audit-Id: 8981d133-859b-439f-819e-ad31aba345fe
	I1128 00:05:20.258799   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:20.259068   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"751","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1128 00:05:20.259567   28506 pod_ready.go:97] node "multinode-883509" hosting pod "kube-proxy-7g246" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-883509" has status "Ready":"False"
	I1128 00:05:20.259595   28506 pod_ready.go:81] duration metric: took 398.861911ms waiting for pod "kube-proxy-7g246" in "kube-system" namespace to be "Ready" ...
	E1128 00:05:20.259624   28506 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-883509" hosting pod "kube-proxy-7g246" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-883509" has status "Ready":"False"
	I1128 00:05:20.259634   28506 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fvsj6" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:20.456232   28506 request.go:629] Waited for 196.538816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvsj6
	I1128 00:05:20.456313   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvsj6
	I1128 00:05:20.456322   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:20.456330   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:20.456338   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:20.458947   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:20.458982   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:20.459004   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:20.459014   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:20.459026   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:20.459036   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:20.459042   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:20 GMT
	I1128 00:05:20.459048   28506 round_trippers.go:580]     Audit-Id: 1e16ea14-8c9d-42d5-b0d3-0bec87d5c366
	I1128 00:05:20.459740   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fvsj6","generateName":"kube-proxy-","namespace":"kube-system","uid":"d0e7a02e-868c-4774-885c-8b5ad728f451","resourceVersion":"519","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"dea68644-28a8-4da5-b7c7-c0035d2ae817","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dea68644-28a8-4da5-b7c7-c0035d2ae817\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1128 00:05:20.656516   28506 request.go:629] Waited for 196.355253ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1128 00:05:20.656591   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1128 00:05:20.656601   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:20.656613   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:20.656626   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:20.659804   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:20.659829   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:20.659836   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:20.659842   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:20.659854   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:20.659860   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:20.659868   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:20 GMT
	I1128 00:05:20.659875   28506 round_trippers.go:580]     Audit-Id: 09f1367d-5b47-41ed-9339-a5433ac91c81
	I1128 00:05:20.660099   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"783","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3683 chars]
	I1128 00:05:20.660457   28506 pod_ready.go:92] pod "kube-proxy-fvsj6" in "kube-system" namespace has status "Ready":"True"
	I1128 00:05:20.660478   28506 pod_ready.go:81] duration metric: took 400.832925ms waiting for pod "kube-proxy-fvsj6" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:20.660495   28506 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:20.856817   28506 request.go:629] Waited for 196.265831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-883509
	I1128 00:05:20.856902   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-883509
	I1128 00:05:20.856915   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:20.856938   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:20.856967   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:20.859503   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:20.859526   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:20.859536   28506 round_trippers.go:580]     Audit-Id: 77ad2bb7-8211-452a-841b-4b16345f46ea
	I1128 00:05:20.859545   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:20.859553   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:20.859569   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:20.859583   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:20.859591   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:20 GMT
	I1128 00:05:20.859848   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-883509","namespace":"kube-system","uid":"191f6a8c-7604-4f03-ba5a-d717b27f634b","resourceVersion":"776","creationTimestamp":"2023-11-27T23:54:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f3690327bcacf0b7b0b21542aa013461","kubernetes.io/config.mirror":"f3690327bcacf0b7b0b21542aa013461","kubernetes.io/config.seen":"2023-11-27T23:54:44.598174974Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I1128 00:05:21.056662   28506 request.go:629] Waited for 196.336324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:21.056722   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:21.056729   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:21.056744   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:21.056768   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:21.059484   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:21.059509   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:21.059527   28506 round_trippers.go:580]     Audit-Id: d66192bf-9c23-4c44-9d27-0411e6e509ec
	I1128 00:05:21.059536   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:21.059544   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:21.059552   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:21.059560   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:21.059568   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:21 GMT
	I1128 00:05:21.059750   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"751","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1128 00:05:21.060101   28506 pod_ready.go:97] node "multinode-883509" hosting pod "kube-scheduler-multinode-883509" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-883509" has status "Ready":"False"
	I1128 00:05:21.060121   28506 pod_ready.go:81] duration metric: took 399.618699ms waiting for pod "kube-scheduler-multinode-883509" in "kube-system" namespace to be "Ready" ...
	E1128 00:05:21.060129   28506 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-883509" hosting pod "kube-scheduler-multinode-883509" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-883509" has status "Ready":"False"
	I1128 00:05:21.060137   28506 pod_ready.go:38] duration metric: took 1.741908562s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
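	(The wait loop above polls each control-plane pod's Ready condition and deliberately skips pods whose hosting node is not "Ready". A minimal client-go sketch of that style of check follows, for readers reproducing the failure by hand; the pod name, kubeconfig path and 4m budget are taken from this log, everything else is an assumption and this is not minikube's actual pod_ready.go implementation.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Kubeconfig path as written by this test run (assumption: still present on disk).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17206-4749/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll for up to 4 minutes, matching the per-pod budget in the log.
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-multinode-883509", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}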
	I1128 00:05:21.060153   28506 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:05:21.073425   28506 command_runner.go:130] > -16
	I1128 00:05:21.073453   28506 ops.go:34] apiserver oom_adj: -16
	I1128 00:05:21.073460   28506 kubeadm.go:640] restartCluster took 22.146178385s
	I1128 00:05:21.073468   28506 kubeadm.go:406] StartCluster complete in 22.20010947s
	I1128 00:05:21.073488   28506 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:05:21.073555   28506 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:05:21.074163   28506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:05:21.074404   28506 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:05:21.074504   28506 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:05:21.074645   28506 config.go:182] Loaded profile config "multinode-883509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:05:21.077512   28506 out.go:177] * Enabled addons: 
	I1128 00:05:21.074674   28506 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:05:21.079101   28506 addons.go:502] enable addons completed in 4.598101ms: enabled=[]
	I1128 00:05:21.079343   28506 kapi.go:59] client config for multinode-883509: &rest.Config{Host:"https://192.168.39.159:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 00:05:21.079700   28506 round_trippers.go:463] GET https://192.168.39.159:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1128 00:05:21.079714   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:21.079725   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:21.079735   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:21.083621   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:21.083635   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:21.083647   28506 round_trippers.go:580]     Audit-Id: fb106a58-ee71-436a-b28b-7825d4cb5fc9
	I1128 00:05:21.083653   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:21.083669   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:21.083676   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:21.083689   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:21.083700   28506 round_trippers.go:580]     Content-Length: 291
	I1128 00:05:21.083707   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:21 GMT
	I1128 00:05:21.084026   28506 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"6e7cc9d5-ec42-4b16-9afb-9c3b43521ec6","resourceVersion":"862","creationTimestamp":"2023-11-27T23:54:52Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1128 00:05:21.084195   28506 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-883509" context rescaled to 1 replicas
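	(The rescale logged above goes through the coredns Deployment's Scale subresource. An equivalent manual step, illustrative only and assuming the profile name from this log as the kubectl context, would be:

		kubectl --context multinode-883509 -n kube-system scale deployment coredns --replicas=1

	)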
	I1128 00:05:21.084222   28506 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:05:21.086718   28506 out.go:177] * Verifying Kubernetes components...
	I1128 00:05:21.088299   28506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:05:21.174000   28506 command_runner.go:130] > apiVersion: v1
	I1128 00:05:21.174030   28506 command_runner.go:130] > data:
	I1128 00:05:21.174034   28506 command_runner.go:130] >   Corefile: |
	I1128 00:05:21.174038   28506 command_runner.go:130] >     .:53 {
	I1128 00:05:21.174042   28506 command_runner.go:130] >         log
	I1128 00:05:21.174046   28506 command_runner.go:130] >         errors
	I1128 00:05:21.174050   28506 command_runner.go:130] >         health {
	I1128 00:05:21.174054   28506 command_runner.go:130] >            lameduck 5s
	I1128 00:05:21.174058   28506 command_runner.go:130] >         }
	I1128 00:05:21.174065   28506 command_runner.go:130] >         ready
	I1128 00:05:21.174071   28506 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1128 00:05:21.174075   28506 command_runner.go:130] >            pods insecure
	I1128 00:05:21.174084   28506 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1128 00:05:21.174094   28506 command_runner.go:130] >            ttl 30
	I1128 00:05:21.174105   28506 command_runner.go:130] >         }
	I1128 00:05:21.174112   28506 command_runner.go:130] >         prometheus :9153
	I1128 00:05:21.174122   28506 command_runner.go:130] >         hosts {
	I1128 00:05:21.174128   28506 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1128 00:05:21.174136   28506 command_runner.go:130] >            fallthrough
	I1128 00:05:21.174142   28506 command_runner.go:130] >         }
	I1128 00:05:21.174149   28506 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1128 00:05:21.174154   28506 command_runner.go:130] >            max_concurrent 1000
	I1128 00:05:21.174158   28506 command_runner.go:130] >         }
	I1128 00:05:21.174161   28506 command_runner.go:130] >         cache 30
	I1128 00:05:21.174166   28506 command_runner.go:130] >         loop
	I1128 00:05:21.174170   28506 command_runner.go:130] >         reload
	I1128 00:05:21.174181   28506 command_runner.go:130] >         loadbalance
	I1128 00:05:21.174191   28506 command_runner.go:130] >     }
	I1128 00:05:21.174198   28506 command_runner.go:130] > kind: ConfigMap
	I1128 00:05:21.174208   28506 command_runner.go:130] > metadata:
	I1128 00:05:21.174217   28506 command_runner.go:130] >   creationTimestamp: "2023-11-27T23:54:52Z"
	I1128 00:05:21.174226   28506 command_runner.go:130] >   name: coredns
	I1128 00:05:21.174233   28506 command_runner.go:130] >   namespace: kube-system
	I1128 00:05:21.174240   28506 command_runner.go:130] >   resourceVersion: "404"
	I1128 00:05:21.174245   28506 command_runner.go:130] >   uid: d6785235-40c2-4ba1-9508-c3c6363ea59f
	I1128 00:05:21.176415   28506 node_ready.go:35] waiting up to 6m0s for node "multinode-883509" to be "Ready" ...
	I1128 00:05:21.176438   28506 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1128 00:05:21.256745   28506 request.go:629] Waited for 80.240104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:21.256845   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:21.256856   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:21.256863   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:21.256869   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:21.259464   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:21.259484   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:21.259490   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:21.259496   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:21.259501   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:21 GMT
	I1128 00:05:21.259506   28506 round_trippers.go:580]     Audit-Id: 145ba717-9841-4a94-9fd5-a4483b98799b
	I1128 00:05:21.259523   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:21.259530   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:21.259825   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"751","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1128 00:05:21.456544   28506 request.go:629] Waited for 196.352435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:21.456625   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:21.456630   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:21.456638   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:21.456646   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:21.459175   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:21.459195   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:21.459202   28506 round_trippers.go:580]     Audit-Id: 28cdab6b-4373-4217-add2-7fbb009ac2c5
	I1128 00:05:21.459207   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:21.459212   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:21.459217   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:21.459222   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:21.459227   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:21 GMT
	I1128 00:05:21.459416   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"751","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1128 00:05:21.960562   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:21.960588   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:21.960597   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:21.960603   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:21.963596   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:21.963619   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:21.963630   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:21.963639   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:21.963646   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:21.963654   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:21.963661   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:21 GMT
	I1128 00:05:21.963669   28506 round_trippers.go:580]     Audit-Id: 2700428a-fa1b-49c0-a1f0-d8378aaad3fe
	I1128 00:05:21.963841   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"751","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1128 00:05:22.460549   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:22.460574   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:22.460582   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:22.460588   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:22.463516   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:22.463548   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:22.463555   28506 round_trippers.go:580]     Audit-Id: e4e1eeca-d9cf-46ea-a0f4-863ee59325e3
	I1128 00:05:22.463561   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:22.463566   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:22.463571   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:22.463576   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:22.463583   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:22 GMT
	I1128 00:05:22.464344   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"751","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1128 00:05:22.959983   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:22.960011   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:22.960024   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:22.960032   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:22.962882   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:22.962909   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:22.962920   28506 round_trippers.go:580]     Audit-Id: 429ee486-8242-4540-a8a3-50fa715816a1
	I1128 00:05:22.962928   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:22.962940   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:22.962953   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:22.962974   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:22.962988   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:22 GMT
	I1128 00:05:22.963583   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"751","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1128 00:05:23.460776   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:23.460804   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:23.460819   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:23.460829   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:23.463773   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:23.463793   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:23.463803   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:23.463811   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:23 GMT
	I1128 00:05:23.463820   28506 round_trippers.go:580]     Audit-Id: 0a22538a-e806-4da4-922f-b516b8b17e55
	I1128 00:05:23.463830   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:23.463839   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:23.463848   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:23.464280   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"751","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1128 00:05:23.464584   28506 node_ready.go:58] node "multinode-883509" has status "Ready":"False"
	I1128 00:05:23.959972   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:23.959999   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:23.960011   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:23.960022   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:23.964307   28506 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 00:05:23.964337   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:23.964349   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:23.964356   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:23.964363   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:23.964371   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:23 GMT
	I1128 00:05:23.964380   28506 round_trippers.go:580]     Audit-Id: 8d65e36e-9029-4287-beba-4f5bdcf043de
	I1128 00:05:23.964389   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:23.964851   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"751","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1128 00:05:24.460582   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:24.460612   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:24.460635   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:24.460644   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:24.463988   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:24.464009   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:24.464020   28506 round_trippers.go:580]     Audit-Id: 72b0ee99-91b3-4645-b96b-d68b5472e130
	I1128 00:05:24.464027   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:24.464036   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:24.464043   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:24.464051   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:24.464059   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:24 GMT
	I1128 00:05:24.464676   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"751","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1128 00:05:24.960290   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:24.960318   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:24.960334   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:24.960343   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:24.963005   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:24.963046   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:24.963058   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:24.963063   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:24.963071   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:24 GMT
	I1128 00:05:24.963078   28506 round_trippers.go:580]     Audit-Id: 71418f10-836c-433a-a431-74f81411b8c0
	I1128 00:05:24.963083   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:24.963091   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:24.963533   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"751","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1128 00:05:25.460490   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:25.460514   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:25.460524   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:25.460532   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:25.464369   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:25.464395   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:25.464405   28506 round_trippers.go:580]     Audit-Id: 9aa96952-ff27-4756-9916-b0e8a15ddf66
	I1128 00:05:25.464413   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:25.464427   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:25.464435   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:25.464444   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:25.464455   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:25 GMT
	I1128 00:05:25.464622   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"751","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1128 00:05:25.465060   28506 node_ready.go:58] node "multinode-883509" has status "Ready":"False"
	I1128 00:05:25.960207   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:25.960231   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:25.960242   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:25.960251   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:25.963088   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:25.963114   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:25.963123   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:25.963132   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:25.963139   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:25.963147   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:25.963154   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:25 GMT
	I1128 00:05:25.963162   28506 round_trippers.go:580]     Audit-Id: ab8e6f01-da7b-4e2f-996d-a3a66c3428c0
	I1128 00:05:25.963383   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"751","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1128 00:05:26.460016   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:26.460042   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:26.460050   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:26.460056   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:26.463007   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:26.463027   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:26.463034   28506 round_trippers.go:580]     Audit-Id: 28c5cd44-f40c-4cc0-961a-c746bcb94275
	I1128 00:05:26.463039   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:26.463045   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:26.463052   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:26.463058   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:26.463064   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:26 GMT
	I1128 00:05:26.463559   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"751","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1128 00:05:26.960214   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:26.960240   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:26.960248   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:26.960254   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:26.963967   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:26.963996   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:26.964006   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:26.964015   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:26.964032   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:26.964040   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:26.964047   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:26 GMT
	I1128 00:05:26.964057   28506 round_trippers.go:580]     Audit-Id: 59a8c123-e821-41d0-8126-fe2494334d34
	I1128 00:05:26.964453   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:26.964748   28506 node_ready.go:49] node "multinode-883509" has status "Ready":"True"
	I1128 00:05:26.964778   28506 node_ready.go:38] duration metric: took 5.788340972s waiting for node "multinode-883509" to be "Ready" ...
	I1128 00:05:26.964786   28506 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:05:26.964837   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I1128 00:05:26.964846   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:26.964853   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:26.964859   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:26.968340   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:26.968362   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:26.968370   28506 round_trippers.go:580]     Audit-Id: 5963e8cc-db47-421a-b4b6-86be529e0cf4
	I1128 00:05:26.968379   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:26.968387   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:26.968397   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:26.968404   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:26.968422   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:26 GMT
	I1128 00:05:26.969929   28506 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"886"},"items":[{"metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"782","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82451 chars]
	I1128 00:05:26.972393   28506 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:26.972454   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1128 00:05:26.972462   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:26.972469   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:26.972475   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:26.974938   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:26.974959   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:26.974967   28506 round_trippers.go:580]     Audit-Id: 0950bf77-35f9-4e93-90dc-c945aefdfdf4
	I1128 00:05:26.974974   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:26.974982   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:26.974989   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:26.975007   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:26.975015   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:26 GMT
	I1128 00:05:26.975178   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"782","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1128 00:05:26.975702   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:26.975720   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:26.975727   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:26.975732   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:26.977723   28506 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 00:05:26.977739   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:26.977752   28506 round_trippers.go:580]     Audit-Id: b22d6475-33ed-4bbc-b7cc-cc7b4b235d3e
	I1128 00:05:26.977759   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:26.977769   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:26.977787   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:26.977795   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:26.977807   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:26 GMT
	I1128 00:05:26.977975   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:26.978364   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1128 00:05:26.978378   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:26.978395   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:26.978409   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:26.980505   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:26.980525   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:26.980534   28506 round_trippers.go:580]     Audit-Id: c3d5f065-2920-46ac-a918-621c681d8c75
	I1128 00:05:26.980543   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:26.980560   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:26.980568   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:26.980579   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:26.980588   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:26 GMT
	I1128 00:05:26.980879   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"782","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1128 00:05:26.981415   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:26.981431   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:26.981438   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:26.981447   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:26.983459   28506 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 00:05:26.983480   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:26.983488   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:26.983496   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:26.983504   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:26.983512   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:26.983520   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:26 GMT
	I1128 00:05:26.983528   28506 round_trippers.go:580]     Audit-Id: 043fe895-278b-4e4b-9525-8080104b5851
	I1128 00:05:26.983843   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:27.484688   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1128 00:05:27.484725   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:27.484733   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:27.484739   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:27.487964   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:27.487990   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:27.487997   28506 round_trippers.go:580]     Audit-Id: 7e57cf2b-8d63-4d51-8d1b-666d3f97ed9a
	I1128 00:05:27.488003   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:27.488008   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:27.488013   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:27.488019   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:27.488028   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:27 GMT
	I1128 00:05:27.488165   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"782","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1128 00:05:27.488611   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:27.488624   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:27.488635   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:27.488641   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:27.490853   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:27.490872   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:27.490878   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:27.490883   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:27 GMT
	I1128 00:05:27.490888   28506 round_trippers.go:580]     Audit-Id: 2d8f4e88-a6f0-48f4-813d-7b56e979d5ec
	I1128 00:05:27.490893   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:27.490898   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:27.490909   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:27.491374   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:27.985090   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1128 00:05:27.985114   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:27.985122   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:27.985127   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:27.988245   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:27.988261   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:27.988268   28506 round_trippers.go:580]     Audit-Id: 42922eab-72dd-40db-8f05-722f9a711605
	I1128 00:05:27.988276   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:27.988284   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:27.988292   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:27.988309   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:27.988317   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:27 GMT
	I1128 00:05:27.988457   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"782","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1128 00:05:27.988896   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:27.988909   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:27.988919   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:27.988924   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:27.992680   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:27.992701   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:27.992712   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:27.992720   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:27 GMT
	I1128 00:05:27.992729   28506 round_trippers.go:580]     Audit-Id: 8a2dc61c-8961-46e9-885e-22120a96f005
	I1128 00:05:27.992737   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:27.992743   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:27.992749   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:27.993777   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:28.485010   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1128 00:05:28.485042   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:28.485057   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:28.485067   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:28.488149   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:28.488170   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:28.488177   28506 round_trippers.go:580]     Audit-Id: 501b89b9-00c3-44a7-aded-253d0f5bbad7
	I1128 00:05:28.488183   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:28.488188   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:28.488193   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:28.488198   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:28.488210   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:28 GMT
	I1128 00:05:28.488635   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"782","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1128 00:05:28.489091   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:28.489105   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:28.489112   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:28.489118   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:28.491706   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:28.491728   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:28.491737   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:28.491745   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:28.491750   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:28 GMT
	I1128 00:05:28.491756   28506 round_trippers.go:580]     Audit-Id: 916f26cc-e194-4551-a269-250f0d37acdf
	I1128 00:05:28.491761   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:28.491765   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:28.492020   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:28.984931   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1128 00:05:28.984952   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:28.984960   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:28.984966   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:28.991108   28506 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1128 00:05:28.991133   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:28.991144   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:28.991154   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:28.991167   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:28.991177   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:28 GMT
	I1128 00:05:28.991183   28506 round_trippers.go:580]     Audit-Id: f1fb9c15-7164-4c48-aa61-e7e0a77ca99c
	I1128 00:05:28.991191   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:28.991342   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"782","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1128 00:05:28.991756   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:28.991767   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:28.991775   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:28.991780   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:29.004937   28506 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1128 00:05:29.004959   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:29.004965   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:28 GMT
	I1128 00:05:29.004973   28506 round_trippers.go:580]     Audit-Id: 31870b01-f798-4772-9786-d773e3a37da8
	I1128 00:05:29.004981   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:29.004990   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:29.004997   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:29.005008   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:29.005117   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:29.005438   28506 pod_ready.go:102] pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace has status "Ready":"False"
	I1128 00:05:29.484721   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1128 00:05:29.484746   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:29.484767   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:29.484773   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:29.487533   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:29.487551   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:29.487559   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:29.487568   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:29.487576   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:29 GMT
	I1128 00:05:29.487584   28506 round_trippers.go:580]     Audit-Id: bfc2175d-0e29-4fcf-8bb8-c733b58d4f6c
	I1128 00:05:29.487593   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:29.487601   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:29.487761   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"782","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1128 00:05:29.488200   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:29.488213   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:29.488220   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:29.488226   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:29.490570   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:29.490588   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:29.490597   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:29.490609   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:29.490616   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:29.490624   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:29 GMT
	I1128 00:05:29.490632   28506 round_trippers.go:580]     Audit-Id: f3e44aa4-fb4c-4996-91e0-1ad7c6578f0e
	I1128 00:05:29.490639   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:29.490797   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:29.984376   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1128 00:05:29.984402   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:29.984410   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:29.984416   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:29.988483   28506 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 00:05:29.988502   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:29.988509   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:29.988514   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:29.988520   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:29.988525   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:29.988530   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:29 GMT
	I1128 00:05:29.988535   28506 round_trippers.go:580]     Audit-Id: babeb2bb-f3bc-44b8-8e17-f91684cda720
	I1128 00:05:29.988825   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"782","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1128 00:05:29.989252   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:29.989265   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:29.989272   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:29.989278   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:29.992208   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:29.992222   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:29.992228   28506 round_trippers.go:580]     Audit-Id: adc8093f-647a-4306-8891-1045215b6876
	I1128 00:05:29.992233   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:29.992238   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:29.992243   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:29.992252   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:29.992261   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:29 GMT
	I1128 00:05:29.992585   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:30.485352   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1128 00:05:30.485377   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:30.485387   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:30.485407   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:30.488669   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:30.488688   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:30.488695   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:30.488706   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:30 GMT
	I1128 00:05:30.488715   28506 round_trippers.go:580]     Audit-Id: 92d11ba7-a234-4eaa-9a10-4b9dfa73526b
	I1128 00:05:30.488722   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:30.488732   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:30.488744   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:30.489334   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"782","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1128 00:05:30.489744   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:30.489756   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:30.489763   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:30.489769   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:30.492463   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:30.492480   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:30.492491   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:30 GMT
	I1128 00:05:30.492500   28506 round_trippers.go:580]     Audit-Id: f81c7a65-e665-420f-b32d-05be0cdaf78d
	I1128 00:05:30.492508   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:30.492516   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:30.492526   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:30.492534   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:30.492765   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:30.984440   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1128 00:05:30.984466   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:30.984475   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:30.984481   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:30.989630   28506 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1128 00:05:30.989658   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:30.989671   28506 round_trippers.go:580]     Audit-Id: c02b194d-2c3d-41fd-975b-d1333c8f0b2d
	I1128 00:05:30.989680   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:30.989693   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:30.989706   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:30.989719   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:30.989739   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:30 GMT
	I1128 00:05:30.990157   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"782","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1128 00:05:30.990716   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:30.990733   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:30.990741   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:30.990750   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:30.993166   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:30.993188   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:30.993198   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:30.993209   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:30.993218   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:30.993225   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:30.993231   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:30 GMT
	I1128 00:05:30.993242   28506 round_trippers.go:580]     Audit-Id: 7317defb-57a1-4261-86bf-9e9c5dcf0ac9
	I1128 00:05:30.993399   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:31.485140   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1128 00:05:31.485181   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:31.485193   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:31.485203   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:31.488802   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:31.488824   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:31.488831   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:31 GMT
	I1128 00:05:31.488837   28506 round_trippers.go:580]     Audit-Id: 284cc6ba-0312-4161-8a2e-fac8fea3d352
	I1128 00:05:31.488842   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:31.488848   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:31.488853   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:31.488858   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:31.489111   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"782","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1128 00:05:31.489556   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:31.489570   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:31.489577   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:31.489583   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:31.491770   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:31.491792   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:31.491803   28506 round_trippers.go:580]     Audit-Id: ff5f87c2-3f22-4324-9c5b-3043cdb032ed
	I1128 00:05:31.491811   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:31.491819   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:31.491828   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:31.491836   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:31.491843   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:31 GMT
	I1128 00:05:31.492171   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:31.492432   28506 pod_ready.go:102] pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace has status "Ready":"False"
	I1128 00:05:31.984884   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1128 00:05:31.984912   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:31.984920   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:31.984930   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:31.988730   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:31.988748   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:31.988769   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:31.988781   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:31 GMT
	I1128 00:05:31.988793   28506 round_trippers.go:580]     Audit-Id: 3758d2b8-1b30-435a-9adc-31dd7a142b91
	I1128 00:05:31.988802   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:31.988808   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:31.988815   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:31.989992   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"782","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1128 00:05:31.990396   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:31.990408   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:31.990416   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:31.990422   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:31.993622   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:31.993643   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:31.993654   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:31.993660   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:31.993665   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:31 GMT
	I1128 00:05:31.993670   28506 round_trippers.go:580]     Audit-Id: 2df6c232-b0b1-43c4-87f1-e761b35b058e
	I1128 00:05:31.993681   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:31.993692   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:31.993833   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:32.484467   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1128 00:05:32.484495   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:32.484503   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:32.484512   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:32.488195   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:32.488218   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:32.488225   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:32 GMT
	I1128 00:05:32.488230   28506 round_trippers.go:580]     Audit-Id: 33cd251b-c095-4611-a878-d8d179ec20c8
	I1128 00:05:32.488235   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:32.488240   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:32.488245   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:32.488250   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:32.488383   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"782","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1128 00:05:32.489109   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:32.489132   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:32.489142   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:32.489150   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:32.491603   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:32.491626   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:32.491634   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:32.491639   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:32.491650   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:32.491655   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:32.491661   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:32 GMT
	I1128 00:05:32.491666   28506 round_trippers.go:580]     Audit-Id: 701e40fe-a64b-4b21-94cc-e77299e1d9df
	I1128 00:05:32.491967   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:32.984653   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1128 00:05:32.984680   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:32.984688   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:32.984695   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:32.989973   28506 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1128 00:05:32.989999   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:32.990010   28506 round_trippers.go:580]     Audit-Id: 1aac6f8f-a460-4d8b-9003-e629e99960ec
	I1128 00:05:32.990020   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:32.990030   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:32.990040   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:32.990050   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:32.990060   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:32 GMT
	I1128 00:05:32.991677   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"782","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1128 00:05:32.992263   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:32.992283   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:32.992295   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:32.992305   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:32.994315   28506 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 00:05:32.994335   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:32.994345   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:32.994354   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:32 GMT
	I1128 00:05:32.994363   28506 round_trippers.go:580]     Audit-Id: 561f6bff-f429-4a54-bb95-4dcc69872873
	I1128 00:05:32.994370   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:32.994375   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:32.994381   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:32.994569   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:33.484889   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1128 00:05:33.484913   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:33.484924   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:33.484934   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:33.489069   28506 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 00:05:33.489096   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:33.489106   28506 round_trippers.go:580]     Audit-Id: 0891ed04-9ce1-4e31-a477-79d40b8e7b4f
	I1128 00:05:33.489128   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:33.489138   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:33.489144   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:33.489156   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:33.489176   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:33 GMT
	I1128 00:05:33.489649   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"782","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1128 00:05:33.490145   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:33.490162   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:33.490170   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:33.490175   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:33.494148   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:33.494167   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:33.494177   28506 round_trippers.go:580]     Audit-Id: 0833ac96-59a1-47c9-8aa6-f0063b270604
	I1128 00:05:33.494184   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:33.494189   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:33.494194   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:33.494199   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:33.494204   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:33 GMT
	I1128 00:05:33.494984   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:33.495382   28506 pod_ready.go:102] pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace has status "Ready":"False"
	I1128 00:05:33.984459   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1128 00:05:33.984487   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:33.984500   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:33.984508   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:33.988356   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:33.988380   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:33.988392   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:33.988401   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:33.988410   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:33.988433   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:33.988446   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:33 GMT
	I1128 00:05:33.988452   28506 round_trippers.go:580]     Audit-Id: 65f3bc2d-f074-4a0c-97ec-8c5110ab2a93
	I1128 00:05:33.988977   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"910","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1128 00:05:33.989518   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:33.989535   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:33.989547   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:33.989556   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:33.993596   28506 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 00:05:33.993614   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:33.993622   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:33.993631   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:33.993638   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:33.993655   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:33.993668   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:33 GMT
	I1128 00:05:33.993677   28506 round_trippers.go:580]     Audit-Id: e3607c61-4e7c-4b67-8ef3-7e69f4b977bf
	I1128 00:05:33.994669   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:33.995035   28506 pod_ready.go:92] pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace has status "Ready":"True"
	I1128 00:05:33.995058   28506 pod_ready.go:81] duration metric: took 7.022645269s waiting for pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:33.995078   28506 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:33.995139   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-883509
	I1128 00:05:33.995150   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:33.995161   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:33.995172   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:33.997758   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:33.997772   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:33.997782   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:33.997788   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:33.997796   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:33.997804   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:33.997813   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:33 GMT
	I1128 00:05:33.997822   28506 round_trippers.go:580]     Audit-Id: a1ef96ae-2d60-40e9-b053-fd097bc2057f
	I1128 00:05:33.998136   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-883509","namespace":"kube-system","uid":"58bb8943-0a7c-4d4c-a090-ea8de587f504","resourceVersion":"887","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.159:2379","kubernetes.io/config.hash":"8d23c211c8738dad6e022e03cd2c9ea7","kubernetes.io/config.mirror":"8d23c211c8738dad6e022e03cd2c9ea7","kubernetes.io/config.seen":"2023-11-27T23:54:53.116542435Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1128 00:05:33.998581   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:33.998596   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:33.998607   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:33.998616   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:34.001314   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:34.001330   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:34.001339   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:34.001348   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:34.001356   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:34.001364   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:33 GMT
	I1128 00:05:34.001372   28506 round_trippers.go:580]     Audit-Id: cf401b4e-cb22-4967-ba94-a6dcc8fbeaad
	I1128 00:05:34.001383   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:34.001475   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:34.001789   28506 pod_ready.go:92] pod "etcd-multinode-883509" in "kube-system" namespace has status "Ready":"True"
	I1128 00:05:34.001808   28506 pod_ready.go:81] duration metric: took 6.720366ms waiting for pod "etcd-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:34.001827   28506 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:34.001879   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-883509
	I1128 00:05:34.001888   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:34.001898   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:34.001907   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:34.004557   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:34.004575   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:34.004585   28506 round_trippers.go:580]     Audit-Id: 43efcc92-0518-4740-a3a0-6c8406540ac9
	I1128 00:05:34.004594   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:34.004602   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:34.004613   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:34.004622   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:34.004629   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:33 GMT
	I1128 00:05:34.004805   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-883509","namespace":"kube-system","uid":"0a144c07-5db8-418a-ad15-110fabc7f377","resourceVersion":"880","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.159:8443","kubernetes.io/config.hash":"3b5e7b5fdb84862f46e6248e54c84795","kubernetes.io/config.mirror":"3b5e7b5fdb84862f46e6248e54c84795","kubernetes.io/config.seen":"2023-11-27T23:54:53.116543447Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1128 00:05:34.005205   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:34.005216   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:34.005225   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:34.005237   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:34.007564   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:34.007579   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:34.007588   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:34.007596   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:34 GMT
	I1128 00:05:34.007605   28506 round_trippers.go:580]     Audit-Id: 1f106eb8-3d2e-448b-a65d-dcda6342c0b0
	I1128 00:05:34.007612   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:34.007622   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:34.007635   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:34.007807   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:34.008177   28506 pod_ready.go:92] pod "kube-apiserver-multinode-883509" in "kube-system" namespace has status "Ready":"True"
	I1128 00:05:34.008194   28506 pod_ready.go:81] duration metric: took 6.358437ms waiting for pod "kube-apiserver-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:34.008207   28506 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:34.008269   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-883509
	I1128 00:05:34.008279   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:34.008291   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:34.008303   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:34.010279   28506 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 00:05:34.010301   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:34.010311   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:34 GMT
	I1128 00:05:34.010321   28506 round_trippers.go:580]     Audit-Id: 83b65c06-3276-4cdd-9505-0b7d6a0b6590
	I1128 00:05:34.010330   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:34.010339   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:34.010350   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:34.010359   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:34.010509   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-883509","namespace":"kube-system","uid":"f8474e48-c333-4772-ae1f-59cdb2bf95eb","resourceVersion":"882","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"de58e44a016d081ac103af6880ca64f0","kubernetes.io/config.mirror":"de58e44a016d081ac103af6880ca64f0","kubernetes.io/config.seen":"2023-11-27T23:54:53.116544230Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1128 00:05:34.011050   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:34.011070   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:34.011081   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:34.011092   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:34.012792   28506 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 00:05:34.012810   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:34.012820   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:34.012827   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:34.012836   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:34.012844   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:34 GMT
	I1128 00:05:34.012857   28506 round_trippers.go:580]     Audit-Id: 59dcf7a1-a71d-4e2f-8242-bc44244885ae
	I1128 00:05:34.012866   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:34.013078   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:34.013455   28506 pod_ready.go:92] pod "kube-controller-manager-multinode-883509" in "kube-system" namespace has status "Ready":"True"
	I1128 00:05:34.013475   28506 pod_ready.go:81] duration metric: took 5.257423ms waiting for pod "kube-controller-manager-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:34.013556   28506 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6dvv4" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:34.013614   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6dvv4
	I1128 00:05:34.013626   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:34.013636   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:34.013641   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:34.015439   28506 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 00:05:34.015456   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:34.015467   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:34.015475   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:34 GMT
	I1128 00:05:34.015488   28506 round_trippers.go:580]     Audit-Id: f69b0696-9fac-423f-880a-938a24817a00
	I1128 00:05:34.015497   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:34.015510   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:34.015527   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:34.015697   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6dvv4","generateName":"kube-proxy-","namespace":"kube-system","uid":"c6651c7d-33a2-4a46-9d73-e60ee19557fa","resourceVersion":"726","creationTimestamp":"2023-11-27T23:56:37Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"dea68644-28a8-4da5-b7c7-c0035d2ae817","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:56:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dea68644-28a8-4da5-b7c7-c0035d2ae817\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1128 00:05:34.016149   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m03
	I1128 00:05:34.016166   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:34.016174   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:34.016184   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:34.018076   28506 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 00:05:34.018089   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:34.018097   28506 round_trippers.go:580]     Audit-Id: 0adcd520-4a01-4222-aec8-3b68998de881
	I1128 00:05:34.018106   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:34.018113   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:34.018122   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:34.018131   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:34.018141   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:34 GMT
	I1128 00:05:34.018320   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m03","uid":"2bc47ce6-2761-4c93-b9f7-cf65c531732f","resourceVersion":"891","creationTimestamp":"2023-11-27T23:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:57:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I1128 00:05:34.018613   28506 pod_ready.go:92] pod "kube-proxy-6dvv4" in "kube-system" namespace has status "Ready":"True"
	I1128 00:05:34.018631   28506 pod_ready.go:81] duration metric: took 5.067959ms waiting for pod "kube-proxy-6dvv4" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:34.018640   28506 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7g246" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:34.185009   28506 request.go:629] Waited for 166.317946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7g246
	I1128 00:05:34.185085   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7g246
	I1128 00:05:34.185093   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:34.185104   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:34.185124   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:34.188785   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:34.188810   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:34.188821   28506 round_trippers.go:580]     Audit-Id: 0b824498-b006-49c8-b3a3-629c80eae51f
	I1128 00:05:34.188829   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:34.188837   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:34.188847   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:34.188855   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:34.188864   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:34 GMT
	I1128 00:05:34.189172   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7g246","generateName":"kube-proxy-","namespace":"kube-system","uid":"c03a2053-f013-4269-a5e1-0acfebfc606c","resourceVersion":"810","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"dea68644-28a8-4da5-b7c7-c0035d2ae817","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dea68644-28a8-4da5-b7c7-c0035d2ae817\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1128 00:05:34.385052   28506 request.go:629] Waited for 195.36302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:34.385140   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:34.385152   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:34.385163   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:34.385174   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:34.388310   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:34.388335   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:34.388354   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:34.388362   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:34.388371   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:34 GMT
	I1128 00:05:34.388383   28506 round_trippers.go:580]     Audit-Id: 3c11d5db-2d50-40ee-889c-7101b9a3c2ed
	I1128 00:05:34.388408   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:34.388420   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:34.388812   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:34.389145   28506 pod_ready.go:92] pod "kube-proxy-7g246" in "kube-system" namespace has status "Ready":"True"
	I1128 00:05:34.389162   28506 pod_ready.go:81] duration metric: took 370.515266ms waiting for pod "kube-proxy-7g246" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:34.389173   28506 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fvsj6" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:34.584544   28506 request.go:629] Waited for 195.307766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvsj6
	I1128 00:05:34.584631   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvsj6
	I1128 00:05:34.584638   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:34.584646   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:34.584654   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:34.587751   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:34.587772   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:34.587782   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:34.587789   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:34.587797   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:34 GMT
	I1128 00:05:34.587805   28506 round_trippers.go:580]     Audit-Id: 265c8f28-fc8f-4d42-821d-924c22edf9d1
	I1128 00:05:34.587814   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:34.587829   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:34.588008   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fvsj6","generateName":"kube-proxy-","namespace":"kube-system","uid":"d0e7a02e-868c-4774-885c-8b5ad728f451","resourceVersion":"519","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"dea68644-28a8-4da5-b7c7-c0035d2ae817","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dea68644-28a8-4da5-b7c7-c0035d2ae817\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1128 00:05:34.784859   28506 request.go:629] Waited for 196.396735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1128 00:05:34.784951   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1128 00:05:34.784958   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:34.784965   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:34.784972   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:34.788016   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:34.788051   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:34.788062   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:34.788070   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:34 GMT
	I1128 00:05:34.788078   28506 round_trippers.go:580]     Audit-Id: 492ff7e3-caf2-48df-a380-8c1869d7b6bd
	I1128 00:05:34.788085   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:34.788090   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:34.788095   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:34.788262   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61","resourceVersion":"783","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3683 chars]
	I1128 00:05:34.788549   28506 pod_ready.go:92] pod "kube-proxy-fvsj6" in "kube-system" namespace has status "Ready":"True"
	I1128 00:05:34.788566   28506 pod_ready.go:81] duration metric: took 399.3857ms waiting for pod "kube-proxy-fvsj6" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:34.788575   28506 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:34.985008   28506 request.go:629] Waited for 196.360125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-883509
	I1128 00:05:34.985070   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-883509
	I1128 00:05:34.985077   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:34.985089   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:34.985102   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:34.989654   28506 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 00:05:34.989684   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:34.989691   28506 round_trippers.go:580]     Audit-Id: d83ac5fd-a32b-4088-9515-2e52116e040d
	I1128 00:05:34.989697   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:34.989702   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:34.989707   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:34.989712   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:34.989717   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:34 GMT
	I1128 00:05:34.989922   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-883509","namespace":"kube-system","uid":"191f6a8c-7604-4f03-ba5a-d717b27f634b","resourceVersion":"902","creationTimestamp":"2023-11-27T23:54:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f3690327bcacf0b7b0b21542aa013461","kubernetes.io/config.mirror":"f3690327bcacf0b7b0b21542aa013461","kubernetes.io/config.seen":"2023-11-27T23:54:44.598174974Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1128 00:05:35.185326   28506 request.go:629] Waited for 195.006441ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:35.185421   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:05:35.185433   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:35.185445   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:35.185461   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:35.189080   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:05:35.189101   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:35.189108   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:35 GMT
	I1128 00:05:35.189113   28506 round_trippers.go:580]     Audit-Id: c1c512c8-f9a5-4d10-b18a-1707c9ec1d36
	I1128 00:05:35.189118   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:35.189126   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:35.189132   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:35.189137   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:35.189293   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1128 00:05:35.189630   28506 pod_ready.go:92] pod "kube-scheduler-multinode-883509" in "kube-system" namespace has status "Ready":"True"
	I1128 00:05:35.189645   28506 pod_ready.go:81] duration metric: took 401.064765ms waiting for pod "kube-scheduler-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:05:35.189654   28506 pod_ready.go:38] duration metric: took 8.224860794s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
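The readiness polling above boils down to GETting each system pod and checking its PodReady condition. A minimal client-go sketch of that check (hypothetical standalone code, not minikube's pod_ready.go; it assumes a kubeconfig at the default location and reuses a pod name from this run):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the PodReady condition is True, which is
    // what the "Ready" waits in the log key off.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-6dvv4", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("pod %s ready: %v\n", pod.Name, podReady(pod))
    }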
	I1128 00:05:35.189667   28506 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:05:35.189724   28506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:05:35.202322   28506 command_runner.go:130] > 1107
	I1128 00:05:35.202706   28506 api_server.go:72] duration metric: took 14.118457345s to wait for apiserver process to appear ...
	I1128 00:05:35.202721   28506 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:05:35.202733   28506 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1128 00:05:35.207688   28506 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I1128 00:05:35.207762   28506 round_trippers.go:463] GET https://192.168.39.159:8443/version
	I1128 00:05:35.207772   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:35.207780   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:35.207785   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:35.209092   28506 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 00:05:35.211312   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:35.211324   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:35.211334   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:35.211345   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:35.211357   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:35.211369   28506 round_trippers.go:580]     Content-Length: 264
	I1128 00:05:35.211378   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:35 GMT
	I1128 00:05:35.211384   28506 round_trippers.go:580]     Audit-Id: 672e48c9-dd22-4742-827f-568e6a1910aa
	I1128 00:05:35.211434   28506 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1128 00:05:35.211497   28506 api_server.go:141] control plane version: v1.28.4
	I1128 00:05:35.211518   28506 api_server.go:131] duration metric: took 8.791114ms to wait for apiserver health ...
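The apiserver health probe above is a raw GET on /healthz (expecting "ok") followed by a GET on /version. Roughly the same pair of checks through client-go's discovery client, as a sketch that reuses the cs clientset and imports from the previous snippet:

    // apiServerHealthy mirrors the healthz + version probe from the log:
    // /healthz must answer "ok", then /version reports the control-plane build.
    func apiServerHealthy(ctx context.Context, cs *kubernetes.Clientset) (string, error) {
        raw, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return "", err
        }
        if string(raw) != "ok" {
            return "", fmt.Errorf("healthz returned %q", raw)
        }
        info, err := cs.Discovery().ServerVersion()
        if err != nil {
            return "", err
        }
        return info.GitVersion, nil // "v1.28.4" in this run
    }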
	I1128 00:05:35.211527   28506 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:05:35.384702   28506 request.go:629] Waited for 173.081478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I1128 00:05:35.384797   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I1128 00:05:35.384804   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:35.384812   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:35.384818   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:35.391985   28506 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1128 00:05:35.392007   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:35.392016   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:35.392025   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:35.392032   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:35 GMT
	I1128 00:05:35.392047   28506 round_trippers.go:580]     Audit-Id: 32f87453-0335-4613-baff-cf5b18262260
	I1128 00:05:35.392055   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:35.392063   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:35.395348   28506 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"914"},"items":[{"metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"910","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81878 chars]
	I1128 00:05:35.398534   28506 system_pods.go:59] 12 kube-system pods found
	I1128 00:05:35.398561   28506 system_pods.go:61] "coredns-5dd5756b68-9vws5" [66ac3c18-9997-49aa-a154-ade69c138f12] Running
	I1128 00:05:35.398568   28506 system_pods.go:61] "etcd-multinode-883509" [58bb8943-0a7c-4d4c-a090-ea8de587f504] Running
	I1128 00:05:35.398578   28506 system_pods.go:61] "kindnet-t4wlq" [ab1a3a4e-2d8d-49cd-bbe5-1e52fa0b4350] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1128 00:05:35.398587   28506 system_pods.go:61] "kindnet-xtnn9" [f708ed6f-b1dd-4fb5-9e07-15fcd79c82c5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1128 00:05:35.398612   28506 system_pods.go:61] "kindnet-ztt77" [acbfe061-9a56-4999-baed-ef8d73dc9222] Running
	I1128 00:05:35.398621   28506 system_pods.go:61] "kube-apiserver-multinode-883509" [0a144c07-5db8-418a-ad15-110fabc7f377] Running
	I1128 00:05:35.398631   28506 system_pods.go:61] "kube-controller-manager-multinode-883509" [f8474e48-c333-4772-ae1f-59cdb2bf95eb] Running
	I1128 00:05:35.398638   28506 system_pods.go:61] "kube-proxy-6dvv4" [c6651c7d-33a2-4a46-9d73-e60ee19557fa] Running
	I1128 00:05:35.398647   28506 system_pods.go:61] "kube-proxy-7g246" [c03a2053-f013-4269-a5e1-0acfebfc606c] Running
	I1128 00:05:35.398654   28506 system_pods.go:61] "kube-proxy-fvsj6" [d0e7a02e-868c-4774-885c-8b5ad728f451] Running
	I1128 00:05:35.398663   28506 system_pods.go:61] "kube-scheduler-multinode-883509" [191f6a8c-7604-4f03-ba5a-d717b27f634b] Running
	I1128 00:05:35.398670   28506 system_pods.go:61] "storage-provisioner" [e59cdfcb-f7c6-4be9-a2e1-0931d582343c] Running
	I1128 00:05:35.398711   28506 system_pods.go:74] duration metric: took 187.153376ms to wait for pod list to return data ...
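The 12-pod inventory above comes from a single List call against the kube-system namespace. A compact sketch of the same query (again assuming the cs clientset from the first snippet):

    // listKubeSystemPods prints each kube-system pod with its UID and phase,
    // the same fields summarised in the system_pods lines above.
    func listKubeSystemPods(ctx context.Context, cs *kubernetes.Clientset) error {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("  %q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
        return nil
    }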
	I1128 00:05:35.398730   28506 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:05:35.585185   28506 request.go:629] Waited for 186.389179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/default/serviceaccounts
	I1128 00:05:35.585259   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/default/serviceaccounts
	I1128 00:05:35.585264   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:35.585272   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:35.585278   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:35.588079   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:05:35.588106   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:35.588116   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:35.588124   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:35.588132   28506 round_trippers.go:580]     Content-Length: 261
	I1128 00:05:35.588140   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:35 GMT
	I1128 00:05:35.588151   28506 round_trippers.go:580]     Audit-Id: 0f40e324-9a8b-4ac0-a5f5-40f38d496ecf
	I1128 00:05:35.588158   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:35.588168   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:35.588222   28506 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"914"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"da7f4830-e8a5-4bf2-be22-fac9b3c7bd70","resourceVersion":"358","creationTimestamp":"2023-11-27T23:55:05Z"}}]}
	I1128 00:05:35.588405   28506 default_sa.go:45] found service account: "default"
	I1128 00:05:35.588420   28506 default_sa.go:55] duration metric: took 189.684728ms for default service account to be created ...
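The default service-account wait is the same pattern against a different resource: list the ServiceAccounts in the default namespace until one named "default" exists. Sketch (same clientset assumption as above):

    // defaultSAExists reports whether the "default" ServiceAccount has been
    // created in the default namespace yet.
    func defaultSAExists(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
        sas, err := cs.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for _, sa := range sas.Items {
            if sa.Name == "default" {
                return true, nil
            }
        }
        return false, nil
    }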
	I1128 00:05:35.588429   28506 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:05:35.784866   28506 request.go:629] Waited for 196.379886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I1128 00:05:35.784929   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I1128 00:05:35.784934   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:35.784941   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:35.784947   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:35.790906   28506 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1128 00:05:35.790924   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:35.790930   28506 round_trippers.go:580]     Audit-Id: 4ea49c87-81da-4d56-8855-2c6ca98d6707
	I1128 00:05:35.790936   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:35.790941   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:35.790945   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:35.790951   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:35.790955   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:35 GMT
	I1128 00:05:35.791915   28506 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"914"},"items":[{"metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"910","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81878 chars]
	I1128 00:05:35.794265   28506 system_pods.go:86] 12 kube-system pods found
	I1128 00:05:35.794286   28506 system_pods.go:89] "coredns-5dd5756b68-9vws5" [66ac3c18-9997-49aa-a154-ade69c138f12] Running
	I1128 00:05:35.794291   28506 system_pods.go:89] "etcd-multinode-883509" [58bb8943-0a7c-4d4c-a090-ea8de587f504] Running
	I1128 00:05:35.794298   28506 system_pods.go:89] "kindnet-t4wlq" [ab1a3a4e-2d8d-49cd-bbe5-1e52fa0b4350] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1128 00:05:35.794304   28506 system_pods.go:89] "kindnet-xtnn9" [f708ed6f-b1dd-4fb5-9e07-15fcd79c82c5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1128 00:05:35.794309   28506 system_pods.go:89] "kindnet-ztt77" [acbfe061-9a56-4999-baed-ef8d73dc9222] Running
	I1128 00:05:35.794313   28506 system_pods.go:89] "kube-apiserver-multinode-883509" [0a144c07-5db8-418a-ad15-110fabc7f377] Running
	I1128 00:05:35.794317   28506 system_pods.go:89] "kube-controller-manager-multinode-883509" [f8474e48-c333-4772-ae1f-59cdb2bf95eb] Running
	I1128 00:05:35.794321   28506 system_pods.go:89] "kube-proxy-6dvv4" [c6651c7d-33a2-4a46-9d73-e60ee19557fa] Running
	I1128 00:05:35.794325   28506 system_pods.go:89] "kube-proxy-7g246" [c03a2053-f013-4269-a5e1-0acfebfc606c] Running
	I1128 00:05:35.794328   28506 system_pods.go:89] "kube-proxy-fvsj6" [d0e7a02e-868c-4774-885c-8b5ad728f451] Running
	I1128 00:05:35.794332   28506 system_pods.go:89] "kube-scheduler-multinode-883509" [191f6a8c-7604-4f03-ba5a-d717b27f634b] Running
	I1128 00:05:35.794336   28506 system_pods.go:89] "storage-provisioner" [e59cdfcb-f7c6-4be9-a2e1-0931d582343c] Running
	I1128 00:05:35.794342   28506 system_pods.go:126] duration metric: took 205.905557ms to wait for k8s-apps to be running ...
	I1128 00:05:35.794348   28506 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:05:35.794389   28506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:05:35.816821   28506 system_svc.go:56] duration metric: took 22.464484ms WaitForService to wait for kubelet.
	I1128 00:05:35.816844   28506 kubeadm.go:581] duration metric: took 14.73259841s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
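The kubelet check above runs "sudo systemctl is-active --quiet service kubelet" over SSH and treats a zero exit status as running. A local (non-SSH) Go sketch of the same idea using os/exec; running it on the host rather than inside the guest is a simplification for illustration:

    // kubeletActive reports whether the kubelet systemd unit is active.
    // --quiet suppresses output, so the exit status alone carries the answer.
    func kubeletActive() bool {
        return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }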
	I1128 00:05:35.816861   28506 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:05:35.985279   28506 request.go:629] Waited for 168.34893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes
	I1128 00:05:35.985342   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes
	I1128 00:05:35.985348   28506 round_trippers.go:469] Request Headers:
	I1128 00:05:35.985355   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:05:35.985365   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:05:35.989857   28506 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 00:05:35.989877   28506 round_trippers.go:577] Response Headers:
	I1128 00:05:35.989885   28506 round_trippers.go:580]     Audit-Id: 6f4f08fb-a5fe-41e8-982c-84e006c3908e
	I1128 00:05:35.989906   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:05:35.989915   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:05:35.989925   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:05:35.989934   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:05:35.989953   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:05:35 GMT
	I1128 00:05:35.990307   28506 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"914"},"items":[{"metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"884","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15075 chars]
	I1128 00:05:35.990919   28506 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:05:35.990960   28506 node_conditions.go:123] node cpu capacity is 2
	I1128 00:05:35.990973   28506 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:05:35.990978   28506 node_conditions.go:123] node cpu capacity is 2
	I1128 00:05:35.990984   28506 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:05:35.990987   28506 node_conditions.go:123] node cpu capacity is 2
	I1128 00:05:35.990993   28506 node_conditions.go:105] duration metric: took 174.128692ms to run NodePressure ...
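The NodePressure pass above lists every node and reads its ephemeral-storage and CPU capacity, which is where the "17784752Ki" and "cpu capacity is 2" lines come from. Sketch (same clientset assumption as the earlier snippets):

    // nodeCapacities prints each node's ephemeral-storage and CPU capacity.
    func nodeCapacities(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }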
	I1128 00:05:35.991003   28506 start.go:228] waiting for startup goroutines ...
	I1128 00:05:35.991018   28506 start.go:233] waiting for cluster config update ...
	I1128 00:05:35.991024   28506 start.go:242] writing updated cluster config ...
	I1128 00:05:35.991468   28506 config.go:182] Loaded profile config "multinode-883509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:05:35.991590   28506 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/config.json ...
	I1128 00:05:35.995023   28506 out.go:177] * Starting worker node multinode-883509-m02 in cluster multinode-883509
	I1128 00:05:35.996439   28506 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:05:35.996463   28506 cache.go:56] Caching tarball of preloaded images
	I1128 00:05:35.996553   28506 preload.go:174] Found /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 00:05:35.996568   28506 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 00:05:35.996686   28506 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/config.json ...
	I1128 00:05:35.996928   28506 start.go:365] acquiring machines lock for multinode-883509-m02: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 00:05:35.996981   28506 start.go:369] acquired machines lock for "multinode-883509-m02" in 28.379µs
	I1128 00:05:35.996999   28506 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:05:35.997011   28506 fix.go:54] fixHost starting: m02
	I1128 00:05:35.997261   28506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 00:05:35.997282   28506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:05:36.012026   28506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39753
	I1128 00:05:36.012447   28506 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:05:36.013005   28506 main.go:141] libmachine: Using API Version  1
	I1128 00:05:36.013047   28506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:05:36.013402   28506 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:05:36.013588   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .DriverName
	I1128 00:05:36.013718   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetState
	I1128 00:05:36.015458   28506 fix.go:102] recreateIfNeeded on multinode-883509-m02: state=Running err=<nil>
	W1128 00:05:36.015477   28506 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:05:36.017699   28506 out.go:177] * Updating the running kvm2 "multinode-883509-m02" VM ...
	I1128 00:05:36.019285   28506 machine.go:88] provisioning docker machine ...
	I1128 00:05:36.019308   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .DriverName
	I1128 00:05:36.019528   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetMachineName
	I1128 00:05:36.019695   28506 buildroot.go:166] provisioning hostname "multinode-883509-m02"
	I1128 00:05:36.019714   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetMachineName
	I1128 00:05:36.019870   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	I1128 00:05:36.022318   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:05:36.022846   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1128 00:05:36.022876   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:05:36.023051   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHPort
	I1128 00:05:36.023221   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1128 00:05:36.023377   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1128 00:05:36.023539   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHUsername
	I1128 00:05:36.023767   28506 main.go:141] libmachine: Using SSH client type: native
	I1128 00:05:36.024088   28506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I1128 00:05:36.024103   28506 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-883509-m02 && echo "multinode-883509-m02" | sudo tee /etc/hostname
	I1128 00:05:36.163366   28506 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-883509-m02
	
	I1128 00:05:36.163394   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	I1128 00:05:36.166799   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:05:36.167111   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1128 00:05:36.167142   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:05:36.167383   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHPort
	I1128 00:05:36.167588   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1128 00:05:36.167759   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1128 00:05:36.167963   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHUsername
	I1128 00:05:36.168163   28506 main.go:141] libmachine: Using SSH client type: native
	I1128 00:05:36.168686   28506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I1128 00:05:36.168709   28506 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-883509-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-883509-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-883509-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:05:36.283231   28506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:05:36.283261   28506 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:05:36.283279   28506 buildroot.go:174] setting up certificates
	I1128 00:05:36.283288   28506 provision.go:83] configureAuth start
	I1128 00:05:36.283300   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetMachineName
	I1128 00:05:36.283567   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetIP
	I1128 00:05:36.286291   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:05:36.286588   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1128 00:05:36.286624   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:05:36.286805   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	I1128 00:05:36.289361   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:05:36.289795   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1128 00:05:36.289825   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:05:36.289936   28506 provision.go:138] copyHostCerts
	I1128 00:05:36.289966   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:05:36.289998   28506 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:05:36.290015   28506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:05:36.290094   28506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:05:36.290218   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:05:36.290245   28506 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:05:36.290252   28506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:05:36.290299   28506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:05:36.290362   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:05:36.290384   28506 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:05:36.290394   28506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:05:36.290426   28506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:05:36.290515   28506 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.multinode-883509-m02 san=[192.168.39.97 192.168.39.97 localhost 127.0.0.1 minikube multinode-883509-m02]
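provision.go issues a per-machine server certificate whose SANs cover the VM IP, localhost, and the machine names listed above. A stripped-down crypto/x509 sketch of a certificate with those SANs; it is self-signed purely to keep the example short, whereas minikube signs with the ca.pem/ca-key.pem pair from its certs directory:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // Template carrying the SANs shown in the provision.go line above.
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-883509-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "multinode-883509-m02"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.39.97"), net.ParseIP("127.0.0.1")},
        }
        // Self-signed for brevity: the template doubles as the issuer.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }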
	I1128 00:05:36.467691   28506 provision.go:172] copyRemoteCerts
	I1128 00:05:36.467758   28506 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:05:36.467786   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	I1128 00:05:36.470672   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:05:36.471070   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1128 00:05:36.471098   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:05:36.471275   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHPort
	I1128 00:05:36.471488   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1128 00:05:36.471663   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHUsername
	I1128 00:05:36.471839   28506 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m02/id_rsa Username:docker}
	I1128 00:05:36.562916   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1128 00:05:36.562983   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:05:36.587394   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1128 00:05:36.587465   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1128 00:05:36.613146   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1128 00:05:36.613213   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 00:05:36.637812   28506 provision.go:86] duration metric: configureAuth took 354.514271ms
	I1128 00:05:36.637838   28506 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:05:36.638082   28506 config.go:182] Loaded profile config "multinode-883509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:05:36.638167   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	I1128 00:05:36.641162   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:05:36.641568   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1128 00:05:36.641594   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:05:36.641809   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHPort
	I1128 00:05:36.641996   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1128 00:05:36.642172   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1128 00:05:36.642317   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHUsername
	I1128 00:05:36.642488   28506 main.go:141] libmachine: Using SSH client type: native
	I1128 00:05:36.642805   28506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I1128 00:05:36.642828   28506 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:07:07.330108   28506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:07:07.330133   28506 machine.go:91] provisioned docker machine in 1m31.310831056s
	I1128 00:07:07.330144   28506 start.go:300] post-start starting for "multinode-883509-m02" (driver="kvm2")
	I1128 00:07:07.330153   28506 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:07:07.330166   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .DriverName
	I1128 00:07:07.330482   28506 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:07:07.330517   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	I1128 00:07:07.332999   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:07:07.333392   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1128 00:07:07.333428   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:07:07.333644   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHPort
	I1128 00:07:07.333828   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1128 00:07:07.333980   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHUsername
	I1128 00:07:07.334122   28506 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m02/id_rsa Username:docker}
	I1128 00:07:07.423327   28506 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:07:07.427467   28506 command_runner.go:130] > NAME=Buildroot
	I1128 00:07:07.427490   28506 command_runner.go:130] > VERSION=2021.02.12-1-g8be4f20-dirty
	I1128 00:07:07.427497   28506 command_runner.go:130] > ID=buildroot
	I1128 00:07:07.427503   28506 command_runner.go:130] > VERSION_ID=2021.02.12
	I1128 00:07:07.427507   28506 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1128 00:07:07.427665   28506 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:07:07.427688   28506 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:07:07.427748   28506 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:07:07.427838   28506 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:07:07.427852   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> /etc/ssl/certs/119302.pem
	I1128 00:07:07.427960   28506 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:07:07.436402   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:07:07.459202   28506 start.go:303] post-start completed in 129.047422ms
	I1128 00:07:07.459224   28506 fix.go:56] fixHost completed within 1m31.462211248s
	I1128 00:07:07.459250   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	I1128 00:07:07.461558   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:07:07.461929   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1128 00:07:07.461959   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:07:07.462136   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHPort
	I1128 00:07:07.462321   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1128 00:07:07.462473   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1128 00:07:07.462621   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHUsername
	I1128 00:07:07.462803   28506 main.go:141] libmachine: Using SSH client type: native
	I1128 00:07:07.463163   28506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I1128 00:07:07.463176   28506 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:07:07.577832   28506 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701130027.569096591
	
	I1128 00:07:07.577857   28506 fix.go:206] guest clock: 1701130027.569096591
	I1128 00:07:07.577864   28506 fix.go:219] Guest: 2023-11-28 00:07:07.569096591 +0000 UTC Remote: 2023-11-28 00:07:07.459228878 +0000 UTC m=+452.298436594 (delta=109.867713ms)
	I1128 00:07:07.577879   28506 fix.go:190] guest clock delta is within tolerance: 109.867713ms
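
The clock probe above is minikube's guest clock-skew check: the command is logged as `date +%!s(MISSING).%!N(MISSING)` apparently because the format arguments are not substituted into the log message, but the output (epoch seconds and nanoseconds) shows the probe that ran is equivalent to `date +%s.%N`, and its result is compared against the host clock. A minimal sketch of the same check, assuming plain bash and reusing the guest address and SSH user from the log (an illustration, not minikube's code):

	remote=$(ssh docker@192.168.39.97 'date +%s.%N')   # guest clock, seconds.nanoseconds
	host_ts=$(date +%s.%N)                             # host clock at roughly the same moment
	awk -v a="$remote" -v b="$host_ts" \
	    'BEGIN { d = a - b; if (d < 0) d = -d; printf "delta: %.3fs\n", d }'
	# minikube accepts the result when the delta stays within its configured tolerance
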
	I1128 00:07:07.577885   28506 start.go:83] releasing machines lock for "multinode-883509-m02", held for 1m31.580891991s
	I1128 00:07:07.577911   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .DriverName
	I1128 00:07:07.578138   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetIP
	I1128 00:07:07.580509   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:07:07.580898   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1128 00:07:07.580923   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:07:07.583025   28506 out.go:177] * Found network options:
	I1128 00:07:07.584489   28506 out.go:177]   - NO_PROXY=192.168.39.159
	W1128 00:07:07.585866   28506 proxy.go:119] fail to check proxy env: Error ip not in block
	I1128 00:07:07.585918   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .DriverName
	I1128 00:07:07.586500   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .DriverName
	I1128 00:07:07.586732   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .DriverName
	I1128 00:07:07.586830   28506 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:07:07.586874   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	W1128 00:07:07.586961   28506 proxy.go:119] fail to check proxy env: Error ip not in block
	I1128 00:07:07.587035   28506 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:07:07.587055   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	I1128 00:07:07.589740   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:07:07.590012   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:07:07.590139   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1128 00:07:07.590166   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:07:07.590346   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHPort
	I1128 00:07:07.590375   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1128 00:07:07.590404   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:07:07.590533   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1128 00:07:07.590547   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHPort
	I1128 00:07:07.590708   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1128 00:07:07.590708   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHUsername
	I1128 00:07:07.590855   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHUsername
	I1128 00:07:07.590857   28506 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m02/id_rsa Username:docker}
	I1128 00:07:07.590975   28506 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m02/id_rsa Username:docker}
	I1128 00:07:07.822588   28506 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1128 00:07:07.822618   28506 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1128 00:07:07.828459   28506 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1128 00:07:07.828580   28506 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:07:07.828641   28506 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:07:07.837630   28506 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1128 00:07:07.837647   28506 start.go:472] detecting cgroup driver to use...
	I1128 00:07:07.837695   28506 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:07:07.852468   28506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:07:07.864806   28506 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:07:07.864850   28506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:07:07.878872   28506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:07:07.891933   28506 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:07:08.037113   28506 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:07:08.167999   28506 docker.go:219] disabling docker service ...
	I1128 00:07:08.168066   28506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:07:08.183911   28506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:07:08.196124   28506 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:07:08.359664   28506 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:07:08.528402   28506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
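
The stop/disable/mask sequence above keeps dockerd and cri-dockerd from owning the container runtime before CRI-O is reconfigured and restarted. A quick, hedged way to confirm the units ended up masked on the node (illustrative commands, not part of this run; `is-enabled` exits non-zero for masked units):

	systemctl is-enabled docker.service cri-docker.service
	# masked
	# masked
	systemctl is-active --quiet docker && echo running || echo stopped
	# stopped
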
	I1128 00:07:08.540952   28506 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:07:08.560085   28506 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1128 00:07:08.560134   28506 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 00:07:08.560191   28506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:07:08.569719   28506 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:07:08.569797   28506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:07:08.579015   28506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:07:08.588404   28506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:07:08.597996   28506 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
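
The four `sed`/`rm` steps above pin the pause image, switch CRI-O's cgroup manager to cgroupfs, move conmon into the pod cgroup, and clear minikube's generated bridge CNI config. A hedged way to verify the resulting drop-in on the node (the grep pattern is illustrative; the expected values come from the log):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
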
	I1128 00:07:08.608402   28506 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:07:08.617161   28506 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1128 00:07:08.617359   28506 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:07:08.626535   28506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:07:08.780321   28506 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 00:08:41.762094   28506 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m32.981723656s)
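
The `systemctl restart crio` above completed, but only after roughly 93 seconds. If a restart of that length needs to be examined, the unit's own journal is the natural place to look; a hedged sketch using standard systemd tooling (not commands from this run):

	sudo systemctl status crio --no-pager | head -n 10
	sudo journalctl -u crio --no-pager | tail -n 50   # messages emitted while the unit was restarting
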
	I1128 00:08:41.762120   28506 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:08:41.762175   28506 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:08:41.767795   28506 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1128 00:08:41.767823   28506 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1128 00:08:41.767837   28506 command_runner.go:130] > Device: 16h/22d	Inode: 1233        Links: 1
	I1128 00:08:41.767848   28506 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 00:08:41.767857   28506 command_runner.go:130] > Access: 2023-11-28 00:08:41.722527902 +0000
	I1128 00:08:41.767868   28506 command_runner.go:130] > Modify: 2023-11-28 00:08:41.660523026 +0000
	I1128 00:08:41.767874   28506 command_runner.go:130] > Change: 2023-11-28 00:08:41.660523026 +0000
	I1128 00:08:41.767881   28506 command_runner.go:130] >  Birth: -
	I1128 00:08:41.767897   28506 start.go:540] Will wait 60s for crictl version
	I1128 00:08:41.767938   28506 ssh_runner.go:195] Run: which crictl
	I1128 00:08:41.771562   28506 command_runner.go:130] > /usr/bin/crictl
	I1128 00:08:41.771802   28506 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:08:41.815037   28506 command_runner.go:130] > Version:  0.1.0
	I1128 00:08:41.815063   28506 command_runner.go:130] > RuntimeName:  cri-o
	I1128 00:08:41.815124   28506 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1128 00:08:41.815475   28506 command_runner.go:130] > RuntimeApiVersion:  v1
	I1128 00:08:41.817138   28506 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
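
The version probe above goes through `crictl` against the CRI socket written to /etc/crictl.yaml earlier. The same query can be reproduced by hand with crictl's standard `--runtime-endpoint` flag (illustrative; expected fields per the output above):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	# Version:            0.1.0
	# RuntimeName:        cri-o
	# RuntimeVersion:     1.24.1
	# RuntimeApiVersion:  v1
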
	I1128 00:08:41.817209   28506 ssh_runner.go:195] Run: crio --version
	I1128 00:08:41.866622   28506 command_runner.go:130] > crio version 1.24.1
	I1128 00:08:41.866643   28506 command_runner.go:130] > Version:          1.24.1
	I1128 00:08:41.866651   28506 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1128 00:08:41.866655   28506 command_runner.go:130] > GitTreeState:     dirty
	I1128 00:08:41.866662   28506 command_runner.go:130] > BuildDate:        2023-11-27T22:40:48Z
	I1128 00:08:41.866667   28506 command_runner.go:130] > GoVersion:        go1.19.9
	I1128 00:08:41.866671   28506 command_runner.go:130] > Compiler:         gc
	I1128 00:08:41.866675   28506 command_runner.go:130] > Platform:         linux/amd64
	I1128 00:08:41.866681   28506 command_runner.go:130] > Linkmode:         dynamic
	I1128 00:08:41.866688   28506 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1128 00:08:41.866692   28506 command_runner.go:130] > SeccompEnabled:   true
	I1128 00:08:41.866696   28506 command_runner.go:130] > AppArmorEnabled:  false
	I1128 00:08:41.868284   28506 ssh_runner.go:195] Run: crio --version
	I1128 00:08:41.926732   28506 command_runner.go:130] > crio version 1.24.1
	I1128 00:08:41.926761   28506 command_runner.go:130] > Version:          1.24.1
	I1128 00:08:41.926773   28506 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1128 00:08:41.926780   28506 command_runner.go:130] > GitTreeState:     dirty
	I1128 00:08:41.926790   28506 command_runner.go:130] > BuildDate:        2023-11-27T22:40:48Z
	I1128 00:08:41.926797   28506 command_runner.go:130] > GoVersion:        go1.19.9
	I1128 00:08:41.926802   28506 command_runner.go:130] > Compiler:         gc
	I1128 00:08:41.926809   28506 command_runner.go:130] > Platform:         linux/amd64
	I1128 00:08:41.926821   28506 command_runner.go:130] > Linkmode:         dynamic
	I1128 00:08:41.926837   28506 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1128 00:08:41.926844   28506 command_runner.go:130] > SeccompEnabled:   true
	I1128 00:08:41.926854   28506 command_runner.go:130] > AppArmorEnabled:  false
	I1128 00:08:41.928783   28506 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 00:08:41.930195   28506 out.go:177]   - env NO_PROXY=192.168.39.159
	I1128 00:08:41.931490   28506 main.go:141] libmachine: (multinode-883509-m02) Calling .GetIP
	I1128 00:08:41.934117   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:08:41.934506   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1128 00:08:41.934530   28506 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1128 00:08:41.934805   28506 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1128 00:08:41.939258   28506 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1128 00:08:41.939565   28506 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509 for IP: 192.168.39.97
	I1128 00:08:41.939585   28506 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:08:41.939750   28506 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:08:41.939809   28506 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:08:41.939827   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1128 00:08:41.939847   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1128 00:08:41.939865   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1128 00:08:41.939881   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1128 00:08:41.939951   28506 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:08:41.939992   28506 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:08:41.940006   28506 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:08:41.940036   28506 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:08:41.940059   28506 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:08:41.940080   28506 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:08:41.940120   28506 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:08:41.940146   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:08:41.940158   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem -> /usr/share/ca-certificates/11930.pem
	I1128 00:08:41.940170   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> /usr/share/ca-certificates/119302.pem
	I1128 00:08:41.940504   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:08:41.963677   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:08:41.985366   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:08:42.007376   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:08:42.028585   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:08:42.049538   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:08:42.070191   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:08:42.091249   28506 ssh_runner.go:195] Run: openssl version
	I1128 00:08:42.096443   28506 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1128 00:08:42.096614   28506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:08:42.107001   28506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:08:42.111562   28506 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:08:42.111645   28506 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:08:42.111697   28506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:08:42.116791   28506 command_runner.go:130] > 51391683
	I1128 00:08:42.116844   28506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:08:42.125575   28506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:08:42.135706   28506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:08:42.140084   28506 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:08:42.140235   28506 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:08:42.140280   28506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:08:42.145335   28506 command_runner.go:130] > 3ec20f2e
	I1128 00:08:42.145389   28506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:08:42.153962   28506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:08:42.164726   28506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:08:42.168730   28506 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:08:42.168927   28506 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:08:42.168981   28506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:08:42.174552   28506 command_runner.go:130] > b5213941
	I1128 00:08:42.174622   28506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
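
Each `ln -fs` above follows OpenSSL's hashed CA-directory convention: the certificate's subject hash (computed by the preceding `openssl x509 -hash -noout` calls) plus the suffix `.0` becomes the symlink name under /etc/ssl/certs, which is how TLS clients on the node locate the CA. A hedged check using the hash printed in the log:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941
	ls -l /etc/ssl/certs/b5213941.0
	# ... /etc/ssl/certs/b5213941.0 -> /usr/share/ca-certificates/minikubeCA.pem
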
	I1128 00:08:42.183677   28506 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:08:42.187194   28506 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 00:08:42.187390   28506 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 00:08:42.187480   28506 ssh_runner.go:195] Run: crio config
	I1128 00:08:42.245286   28506 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1128 00:08:42.245307   28506 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1128 00:08:42.245314   28506 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1128 00:08:42.245318   28506 command_runner.go:130] > #
	I1128 00:08:42.245325   28506 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1128 00:08:42.245335   28506 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1128 00:08:42.245347   28506 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1128 00:08:42.245362   28506 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1128 00:08:42.245372   28506 command_runner.go:130] > # reload'.
	I1128 00:08:42.245382   28506 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1128 00:08:42.245394   28506 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1128 00:08:42.245423   28506 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1128 00:08:42.245434   28506 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1128 00:08:42.245437   28506 command_runner.go:130] > [crio]
	I1128 00:08:42.245443   28506 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1128 00:08:42.245448   28506 command_runner.go:130] > # container images, in this directory.
	I1128 00:08:42.245456   28506 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1128 00:08:42.245470   28506 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1128 00:08:42.245477   28506 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1128 00:08:42.245483   28506 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1128 00:08:42.245491   28506 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1128 00:08:42.245496   28506 command_runner.go:130] > storage_driver = "overlay"
	I1128 00:08:42.245504   28506 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1128 00:08:42.245510   28506 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1128 00:08:42.245514   28506 command_runner.go:130] > storage_option = [
	I1128 00:08:42.245521   28506 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1128 00:08:42.245524   28506 command_runner.go:130] > ]
	I1128 00:08:42.245531   28506 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1128 00:08:42.245537   28506 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1128 00:08:42.245544   28506 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1128 00:08:42.245550   28506 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1128 00:08:42.245558   28506 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1128 00:08:42.245563   28506 command_runner.go:130] > # always happen on a node reboot
	I1128 00:08:42.245570   28506 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1128 00:08:42.245576   28506 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1128 00:08:42.245583   28506 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1128 00:08:42.245592   28506 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1128 00:08:42.245603   28506 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1128 00:08:42.245615   28506 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1128 00:08:42.245630   28506 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1128 00:08:42.245640   28506 command_runner.go:130] > # internal_wipe = true
	I1128 00:08:42.245649   28506 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1128 00:08:42.245664   28506 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1128 00:08:42.245672   28506 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1128 00:08:42.245681   28506 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1128 00:08:42.245691   28506 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1128 00:08:42.245699   28506 command_runner.go:130] > [crio.api]
	I1128 00:08:42.245708   28506 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1128 00:08:42.245720   28506 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1128 00:08:42.245728   28506 command_runner.go:130] > # IP address on which the stream server will listen.
	I1128 00:08:42.245733   28506 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1128 00:08:42.245739   28506 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1128 00:08:42.245747   28506 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1128 00:08:42.245752   28506 command_runner.go:130] > # stream_port = "0"
	I1128 00:08:42.245762   28506 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1128 00:08:42.245773   28506 command_runner.go:130] > # stream_enable_tls = false
	I1128 00:08:42.245784   28506 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1128 00:08:42.245795   28506 command_runner.go:130] > # stream_idle_timeout = ""
	I1128 00:08:42.245806   28506 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1128 00:08:42.245817   28506 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1128 00:08:42.245824   28506 command_runner.go:130] > # minutes.
	I1128 00:08:42.245830   28506 command_runner.go:130] > # stream_tls_cert = ""
	I1128 00:08:42.245843   28506 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1128 00:08:42.245857   28506 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1128 00:08:42.245865   28506 command_runner.go:130] > # stream_tls_key = ""
	I1128 00:08:42.245875   28506 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1128 00:08:42.245888   28506 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1128 00:08:42.245898   28506 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1128 00:08:42.245905   28506 command_runner.go:130] > # stream_tls_ca = ""
	I1128 00:08:42.245913   28506 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1128 00:08:42.245920   28506 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1128 00:08:42.245927   28506 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1128 00:08:42.245934   28506 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1128 00:08:42.245970   28506 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1128 00:08:42.246011   28506 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1128 00:08:42.246024   28506 command_runner.go:130] > [crio.runtime]
	I1128 00:08:42.246034   28506 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1128 00:08:42.246044   28506 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1128 00:08:42.246057   28506 command_runner.go:130] > # "nofile=1024:2048"
	I1128 00:08:42.246075   28506 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1128 00:08:42.246082   28506 command_runner.go:130] > # default_ulimits = [
	I1128 00:08:42.246088   28506 command_runner.go:130] > # ]
	I1128 00:08:42.246099   28506 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1128 00:08:42.246110   28506 command_runner.go:130] > # no_pivot = false
	I1128 00:08:42.246121   28506 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1128 00:08:42.246135   28506 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1128 00:08:42.246148   28506 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1128 00:08:42.246162   28506 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1128 00:08:42.246172   28506 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1128 00:08:42.246181   28506 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1128 00:08:42.246189   28506 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1128 00:08:42.246196   28506 command_runner.go:130] > # Cgroup setting for conmon
	I1128 00:08:42.246208   28506 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1128 00:08:42.246219   28506 command_runner.go:130] > conmon_cgroup = "pod"
	I1128 00:08:42.246229   28506 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1128 00:08:42.246239   28506 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1128 00:08:42.246254   28506 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1128 00:08:42.246264   28506 command_runner.go:130] > conmon_env = [
	I1128 00:08:42.246276   28506 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1128 00:08:42.246284   28506 command_runner.go:130] > ]
	I1128 00:08:42.246294   28506 command_runner.go:130] > # Additional environment variables to set for all the
	I1128 00:08:42.246303   28506 command_runner.go:130] > # containers. These are overridden if set in the
	I1128 00:08:42.246317   28506 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1128 00:08:42.246325   28506 command_runner.go:130] > # default_env = [
	I1128 00:08:42.246335   28506 command_runner.go:130] > # ]
	I1128 00:08:42.246345   28506 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1128 00:08:42.246356   28506 command_runner.go:130] > # selinux = false
	I1128 00:08:42.246370   28506 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1128 00:08:42.246385   28506 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1128 00:08:42.246396   28506 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1128 00:08:42.246407   28506 command_runner.go:130] > # seccomp_profile = ""
	I1128 00:08:42.246420   28506 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1128 00:08:42.246434   28506 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1128 00:08:42.246448   28506 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1128 00:08:42.246458   28506 command_runner.go:130] > # which might increase security.
	I1128 00:08:42.246470   28506 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1128 00:08:42.246484   28506 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1128 00:08:42.246494   28506 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1128 00:08:42.246501   28506 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1128 00:08:42.246511   28506 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1128 00:08:42.246523   28506 command_runner.go:130] > # This option supports live configuration reload.
	I1128 00:08:42.246531   28506 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1128 00:08:42.246544   28506 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1128 00:08:42.246554   28506 command_runner.go:130] > # the cgroup blockio controller.
	I1128 00:08:42.246562   28506 command_runner.go:130] > # blockio_config_file = ""
	I1128 00:08:42.246573   28506 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1128 00:08:42.246584   28506 command_runner.go:130] > # irqbalance daemon.
	I1128 00:08:42.246595   28506 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1128 00:08:42.246609   28506 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1128 00:08:42.246621   28506 command_runner.go:130] > # This option supports live configuration reload.
	I1128 00:08:42.246628   28506 command_runner.go:130] > # rdt_config_file = ""
	I1128 00:08:42.246640   28506 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1128 00:08:42.246651   28506 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1128 00:08:42.246663   28506 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1128 00:08:42.246670   28506 command_runner.go:130] > # separate_pull_cgroup = ""
	I1128 00:08:42.246677   28506 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1128 00:08:42.246689   28506 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1128 00:08:42.246698   28506 command_runner.go:130] > # will be added.
	I1128 00:08:42.246706   28506 command_runner.go:130] > # default_capabilities = [
	I1128 00:08:42.246716   28506 command_runner.go:130] > # 	"CHOWN",
	I1128 00:08:42.246723   28506 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1128 00:08:42.246733   28506 command_runner.go:130] > # 	"FSETID",
	I1128 00:08:42.246740   28506 command_runner.go:130] > # 	"FOWNER",
	I1128 00:08:42.246749   28506 command_runner.go:130] > # 	"SETGID",
	I1128 00:08:42.246755   28506 command_runner.go:130] > # 	"SETUID",
	I1128 00:08:42.246762   28506 command_runner.go:130] > # 	"SETPCAP",
	I1128 00:08:42.246766   28506 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1128 00:08:42.246772   28506 command_runner.go:130] > # 	"KILL",
	I1128 00:08:42.246775   28506 command_runner.go:130] > # ]
	I1128 00:08:42.246782   28506 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1128 00:08:42.246791   28506 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1128 00:08:42.246795   28506 command_runner.go:130] > # default_sysctls = [
	I1128 00:08:42.246802   28506 command_runner.go:130] > # ]
	I1128 00:08:42.246806   28506 command_runner.go:130] > # List of devices on the host that a
	I1128 00:08:42.246812   28506 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1128 00:08:42.246818   28506 command_runner.go:130] > # allowed_devices = [
	I1128 00:08:42.246822   28506 command_runner.go:130] > # 	"/dev/fuse",
	I1128 00:08:42.246827   28506 command_runner.go:130] > # ]
	I1128 00:08:42.246835   28506 command_runner.go:130] > # List of additional devices, specified as
	I1128 00:08:42.246851   28506 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1128 00:08:42.246864   28506 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1128 00:08:42.246918   28506 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1128 00:08:42.246930   28506 command_runner.go:130] > # additional_devices = [
	I1128 00:08:42.246936   28506 command_runner.go:130] > # ]
	I1128 00:08:42.246949   28506 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1128 00:08:42.246959   28506 command_runner.go:130] > # cdi_spec_dirs = [
	I1128 00:08:42.246979   28506 command_runner.go:130] > # 	"/etc/cdi",
	I1128 00:08:42.246986   28506 command_runner.go:130] > # 	"/var/run/cdi",
	I1128 00:08:42.246992   28506 command_runner.go:130] > # ]
	I1128 00:08:42.246999   28506 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1128 00:08:42.247007   28506 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1128 00:08:42.247011   28506 command_runner.go:130] > # Defaults to false.
	I1128 00:08:42.247017   28506 command_runner.go:130] > # device_ownership_from_security_context = false
	I1128 00:08:42.247024   28506 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1128 00:08:42.247032   28506 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1128 00:08:42.247036   28506 command_runner.go:130] > # hooks_dir = [
	I1128 00:08:42.247042   28506 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1128 00:08:42.247045   28506 command_runner.go:130] > # ]
	I1128 00:08:42.247051   28506 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1128 00:08:42.247059   28506 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1128 00:08:42.247065   28506 command_runner.go:130] > # its default mounts from the following two files:
	I1128 00:08:42.247070   28506 command_runner.go:130] > #
	I1128 00:08:42.247076   28506 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1128 00:08:42.247083   28506 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1128 00:08:42.247095   28506 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1128 00:08:42.247098   28506 command_runner.go:130] > #
	I1128 00:08:42.247105   28506 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1128 00:08:42.247113   28506 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1128 00:08:42.247120   28506 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1128 00:08:42.247125   28506 command_runner.go:130] > #      only add mounts it finds in this file.
	I1128 00:08:42.247130   28506 command_runner.go:130] > #
	I1128 00:08:42.247135   28506 command_runner.go:130] > # default_mounts_file = ""
	I1128 00:08:42.247140   28506 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1128 00:08:42.247150   28506 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1128 00:08:42.247154   28506 command_runner.go:130] > pids_limit = 1024
	I1128 00:08:42.247161   28506 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1128 00:08:42.247167   28506 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1128 00:08:42.247175   28506 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1128 00:08:42.247185   28506 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1128 00:08:42.247191   28506 command_runner.go:130] > # log_size_max = -1
	I1128 00:08:42.247198   28506 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1128 00:08:42.247204   28506 command_runner.go:130] > # log_to_journald = false
	I1128 00:08:42.247210   28506 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1128 00:08:42.247215   28506 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1128 00:08:42.247224   28506 command_runner.go:130] > # Path to directory for container attach sockets.
	I1128 00:08:42.247232   28506 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1128 00:08:42.247245   28506 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1128 00:08:42.247253   28506 command_runner.go:130] > # bind_mount_prefix = ""
	I1128 00:08:42.247259   28506 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1128 00:08:42.247262   28506 command_runner.go:130] > # read_only = false
	I1128 00:08:42.247268   28506 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1128 00:08:42.247276   28506 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1128 00:08:42.247283   28506 command_runner.go:130] > # live configuration reload.
	I1128 00:08:42.247288   28506 command_runner.go:130] > # log_level = "info"
	I1128 00:08:42.247300   28506 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1128 00:08:42.247312   28506 command_runner.go:130] > # This option supports live configuration reload.
	I1128 00:08:42.247319   28506 command_runner.go:130] > # log_filter = ""
	I1128 00:08:42.247333   28506 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1128 00:08:42.247345   28506 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1128 00:08:42.247352   28506 command_runner.go:130] > # separated by comma.
	I1128 00:08:42.247357   28506 command_runner.go:130] > # uid_mappings = ""
	I1128 00:08:42.247363   28506 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1128 00:08:42.247372   28506 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1128 00:08:42.247378   28506 command_runner.go:130] > # separated by comma.
	I1128 00:08:42.247383   28506 command_runner.go:130] > # gid_mappings = ""
	I1128 00:08:42.247389   28506 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1128 00:08:42.247399   28506 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1128 00:08:42.247413   28506 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1128 00:08:42.247424   28506 command_runner.go:130] > # minimum_mappable_uid = -1
	I1128 00:08:42.247436   28506 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1128 00:08:42.247446   28506 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1128 00:08:42.247453   28506 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1128 00:08:42.247459   28506 command_runner.go:130] > # minimum_mappable_gid = -1
	I1128 00:08:42.247465   28506 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1128 00:08:42.247473   28506 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1128 00:08:42.247479   28506 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1128 00:08:42.247486   28506 command_runner.go:130] > # ctr_stop_timeout = 30
	I1128 00:08:42.247499   28506 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1128 00:08:42.247512   28506 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1128 00:08:42.247521   28506 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1128 00:08:42.247533   28506 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1128 00:08:42.247562   28506 command_runner.go:130] > drop_infra_ctr = false
	I1128 00:08:42.247572   28506 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1128 00:08:42.247578   28506 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1128 00:08:42.247589   28506 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1128 00:08:42.247598   28506 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1128 00:08:42.247609   28506 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1128 00:08:42.247620   28506 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1128 00:08:42.247629   28506 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1128 00:08:42.247643   28506 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1128 00:08:42.247654   28506 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1128 00:08:42.247666   28506 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1128 00:08:42.247676   28506 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1128 00:08:42.247689   28506 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1128 00:08:42.247699   28506 command_runner.go:130] > # default_runtime = "runc"
	I1128 00:08:42.247710   28506 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1128 00:08:42.247726   28506 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1128 00:08:42.247743   28506 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1128 00:08:42.247758   28506 command_runner.go:130] > # creation as a file is not desired either.
	I1128 00:08:42.247771   28506 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1128 00:08:42.247781   28506 command_runner.go:130] > # the hostname is being managed dynamically.
	I1128 00:08:42.247793   28506 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1128 00:08:42.247799   28506 command_runner.go:130] > # ]
	I1128 00:08:42.247813   28506 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1128 00:08:42.247826   28506 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1128 00:08:42.247840   28506 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1128 00:08:42.247853   28506 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1128 00:08:42.247861   28506 command_runner.go:130] > #
	I1128 00:08:42.247866   28506 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1128 00:08:42.247876   28506 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1128 00:08:42.247886   28506 command_runner.go:130] > #  runtime_type = "oci"
	I1128 00:08:42.247898   28506 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1128 00:08:42.247909   28506 command_runner.go:130] > #  privileged_without_host_devices = false
	I1128 00:08:42.247919   28506 command_runner.go:130] > #  allowed_annotations = []
	I1128 00:08:42.247925   28506 command_runner.go:130] > # Where:
	I1128 00:08:42.247937   28506 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1128 00:08:42.247949   28506 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1128 00:08:42.247956   28506 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1128 00:08:42.247974   28506 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1128 00:08:42.247985   28506 command_runner.go:130] > #   in $PATH.
	I1128 00:08:42.247996   28506 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1128 00:08:42.248008   28506 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1128 00:08:42.248021   28506 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1128 00:08:42.248030   28506 command_runner.go:130] > #   state.
	I1128 00:08:42.248042   28506 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1128 00:08:42.248054   28506 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1128 00:08:42.248062   28506 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1128 00:08:42.248074   28506 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1128 00:08:42.248089   28506 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1128 00:08:42.248100   28506 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1128 00:08:42.248112   28506 command_runner.go:130] > #   The currently recognized values are:
	I1128 00:08:42.248126   28506 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1128 00:08:42.248140   28506 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1128 00:08:42.248153   28506 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1128 00:08:42.248163   28506 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1128 00:08:42.248174   28506 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1128 00:08:42.248190   28506 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1128 00:08:42.248204   28506 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1128 00:08:42.248218   28506 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1128 00:08:42.248230   28506 command_runner.go:130] > #   should be moved to the container's cgroup
	I1128 00:08:42.248240   28506 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1128 00:08:42.248250   28506 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1128 00:08:42.248255   28506 command_runner.go:130] > runtime_type = "oci"
	I1128 00:08:42.248259   28506 command_runner.go:130] > runtime_root = "/run/runc"
	I1128 00:08:42.248269   28506 command_runner.go:130] > runtime_config_path = ""
	I1128 00:08:42.248278   28506 command_runner.go:130] > monitor_path = ""
	I1128 00:08:42.248286   28506 command_runner.go:130] > monitor_cgroup = ""
	I1128 00:08:42.248296   28506 command_runner.go:130] > monitor_exec_cgroup = ""
	I1128 00:08:42.248311   28506 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1128 00:08:42.248321   28506 command_runner.go:130] > # running containers
	I1128 00:08:42.248329   28506 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1128 00:08:42.248342   28506 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1128 00:08:42.248392   28506 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1128 00:08:42.248406   28506 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1128 00:08:42.248414   28506 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1128 00:08:42.248425   28506 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1128 00:08:42.248436   28506 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1128 00:08:42.248447   28506 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1128 00:08:42.248458   28506 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1128 00:08:42.248468   28506 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1128 00:08:42.248476   28506 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1128 00:08:42.248487   28506 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1128 00:08:42.248501   28506 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1128 00:08:42.248516   28506 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1128 00:08:42.248532   28506 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1128 00:08:42.248544   28506 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1128 00:08:42.248560   28506 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1128 00:08:42.248572   28506 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1128 00:08:42.248586   28506 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1128 00:08:42.248601   28506 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1128 00:08:42.248610   28506 command_runner.go:130] > # Example:
	I1128 00:08:42.248620   28506 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1128 00:08:42.248631   28506 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1128 00:08:42.248642   28506 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1128 00:08:42.248650   28506 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1128 00:08:42.248656   28506 command_runner.go:130] > # cpuset = 0
	I1128 00:08:42.248666   28506 command_runner.go:130] > # cpushares = "0-1"
	I1128 00:08:42.248676   28506 command_runner.go:130] > # Where:
	I1128 00:08:42.248684   28506 command_runner.go:130] > # The workload name is workload-type.
	I1128 00:08:42.248700   28506 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1128 00:08:42.248712   28506 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1128 00:08:42.248725   28506 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1128 00:08:42.248740   28506 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1128 00:08:42.248750   28506 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1128 00:08:42.248769   28506 command_runner.go:130] > # 
	I1128 00:08:42.248782   28506 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1128 00:08:42.248791   28506 command_runner.go:130] > #
	I1128 00:08:42.248801   28506 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1128 00:08:42.248814   28506 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1128 00:08:42.248828   28506 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1128 00:08:42.248839   28506 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1128 00:08:42.248850   28506 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1128 00:08:42.248860   28506 command_runner.go:130] > [crio.image]
	I1128 00:08:42.248870   28506 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1128 00:08:42.248881   28506 command_runner.go:130] > # default_transport = "docker://"
	I1128 00:08:42.248895   28506 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1128 00:08:42.248908   28506 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1128 00:08:42.248918   28506 command_runner.go:130] > # global_auth_file = ""
	I1128 00:08:42.248927   28506 command_runner.go:130] > # The image used to instantiate infra containers.
	I1128 00:08:42.248936   28506 command_runner.go:130] > # This option supports live configuration reload.
	I1128 00:08:42.248943   28506 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1128 00:08:42.248958   28506 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1128 00:08:42.248974   28506 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1128 00:08:42.248984   28506 command_runner.go:130] > # This option supports live configuration reload.
	I1128 00:08:42.248996   28506 command_runner.go:130] > # pause_image_auth_file = ""
	I1128 00:08:42.249009   28506 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1128 00:08:42.249023   28506 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1128 00:08:42.249036   28506 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1128 00:08:42.249046   28506 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1128 00:08:42.249051   28506 command_runner.go:130] > # pause_command = "/pause"
	I1128 00:08:42.249065   28506 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1128 00:08:42.249078   28506 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1128 00:08:42.249092   28506 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1128 00:08:42.249102   28506 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1128 00:08:42.249115   28506 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1128 00:08:42.249125   28506 command_runner.go:130] > # signature_policy = ""
	I1128 00:08:42.249135   28506 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1128 00:08:42.249148   28506 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1128 00:08:42.249158   28506 command_runner.go:130] > # changing them here.
	I1128 00:08:42.249165   28506 command_runner.go:130] > # insecure_registries = [
	I1128 00:08:42.249175   28506 command_runner.go:130] > # ]
	I1128 00:08:42.249191   28506 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1128 00:08:42.249203   28506 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1128 00:08:42.249213   28506 command_runner.go:130] > # image_volumes = "mkdir"
	I1128 00:08:42.249222   28506 command_runner.go:130] > # Temporary directory to use for storing big files
	I1128 00:08:42.249231   28506 command_runner.go:130] > # big_files_temporary_dir = ""
	I1128 00:08:42.249240   28506 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1128 00:08:42.249250   28506 command_runner.go:130] > # CNI plugins.
	I1128 00:08:42.249257   28506 command_runner.go:130] > [crio.network]
	I1128 00:08:42.249271   28506 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1128 00:08:42.249283   28506 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1128 00:08:42.249293   28506 command_runner.go:130] > # cni_default_network = ""
	I1128 00:08:42.249303   28506 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1128 00:08:42.249313   28506 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1128 00:08:42.249321   28506 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1128 00:08:42.249328   28506 command_runner.go:130] > # plugin_dirs = [
	I1128 00:08:42.249338   28506 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1128 00:08:42.249348   28506 command_runner.go:130] > # ]
	I1128 00:08:42.249358   28506 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1128 00:08:42.249367   28506 command_runner.go:130] > [crio.metrics]
	I1128 00:08:42.249376   28506 command_runner.go:130] > # Globally enable or disable metrics support.
	I1128 00:08:42.249385   28506 command_runner.go:130] > enable_metrics = true
	I1128 00:08:42.249394   28506 command_runner.go:130] > # Specify enabled metrics collectors.
	I1128 00:08:42.249403   28506 command_runner.go:130] > # Per default all metrics are enabled.
	I1128 00:08:42.249413   28506 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1128 00:08:42.249428   28506 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1128 00:08:42.249441   28506 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1128 00:08:42.249451   28506 command_runner.go:130] > # metrics_collectors = [
	I1128 00:08:42.249461   28506 command_runner.go:130] > # 	"operations",
	I1128 00:08:42.249474   28506 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1128 00:08:42.249485   28506 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1128 00:08:42.249495   28506 command_runner.go:130] > # 	"operations_errors",
	I1128 00:08:42.249504   28506 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1128 00:08:42.249511   28506 command_runner.go:130] > # 	"image_pulls_by_name",
	I1128 00:08:42.249519   28506 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1128 00:08:42.249529   28506 command_runner.go:130] > # 	"image_pulls_failures",
	I1128 00:08:42.249540   28506 command_runner.go:130] > # 	"image_pulls_successes",
	I1128 00:08:42.249548   28506 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1128 00:08:42.249558   28506 command_runner.go:130] > # 	"image_layer_reuse",
	I1128 00:08:42.249568   28506 command_runner.go:130] > # 	"containers_oom_total",
	I1128 00:08:42.249578   28506 command_runner.go:130] > # 	"containers_oom",
	I1128 00:08:42.249588   28506 command_runner.go:130] > # 	"processes_defunct",
	I1128 00:08:42.249598   28506 command_runner.go:130] > # 	"operations_total",
	I1128 00:08:42.249607   28506 command_runner.go:130] > # 	"operations_latency_seconds",
	I1128 00:08:42.249614   28506 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1128 00:08:42.249621   28506 command_runner.go:130] > # 	"operations_errors_total",
	I1128 00:08:42.249632   28506 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1128 00:08:42.249643   28506 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1128 00:08:42.249655   28506 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1128 00:08:42.249666   28506 command_runner.go:130] > # 	"image_pulls_success_total",
	I1128 00:08:42.249676   28506 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1128 00:08:42.249687   28506 command_runner.go:130] > # 	"containers_oom_count_total",
	I1128 00:08:42.249696   28506 command_runner.go:130] > # ]
	I1128 00:08:42.249707   28506 command_runner.go:130] > # The port on which the metrics server will listen.
	I1128 00:08:42.249714   28506 command_runner.go:130] > # metrics_port = 9090
	I1128 00:08:42.249721   28506 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1128 00:08:42.249732   28506 command_runner.go:130] > # metrics_socket = ""
	I1128 00:08:42.249744   28506 command_runner.go:130] > # The certificate for the secure metrics server.
	I1128 00:08:42.249757   28506 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1128 00:08:42.249770   28506 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1128 00:08:42.249781   28506 command_runner.go:130] > # certificate on any modification event.
	I1128 00:08:42.249791   28506 command_runner.go:130] > # metrics_cert = ""
	I1128 00:08:42.249803   28506 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1128 00:08:42.249812   28506 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1128 00:08:42.249819   28506 command_runner.go:130] > # metrics_key = ""
	I1128 00:08:42.249832   28506 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1128 00:08:42.249843   28506 command_runner.go:130] > [crio.tracing]
	I1128 00:08:42.249853   28506 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1128 00:08:42.249863   28506 command_runner.go:130] > # enable_tracing = false
	I1128 00:08:42.249875   28506 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1128 00:08:42.249885   28506 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1128 00:08:42.249897   28506 command_runner.go:130] > # Number of samples to collect per million spans.
	I1128 00:08:42.249908   28506 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1128 00:08:42.249917   28506 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1128 00:08:42.249923   28506 command_runner.go:130] > [crio.stats]
	I1128 00:08:42.249937   28506 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1128 00:08:42.249949   28506 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1128 00:08:42.249965   28506 command_runner.go:130] > # stats_collection_period = 0
	I1128 00:08:42.250005   28506 command_runner.go:130] ! time="2023-11-28 00:08:42.231868174Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1128 00:08:42.250026   28506 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
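
For context on the workloads section in the config dump above: a pod opts in purely via annotations. The following Go sketch is illustrative only (package, pod and container names are invented; the annotation keys follow the commented example in the config):

    package sketch

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // workloadPod sketches a pod that opts into the example "workload-type"
    // workload from the commented config above: the activation annotation is
    // key-only, and the per-container annotation overrides cpushares for the
    // container named "demo". Values are illustrative.
    func workloadPod() *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{
    			Name: "workload-demo",
    			Annotations: map[string]string{
    				"io.crio/workload":           "",
    				"io.crio.workload-type/demo": `{"cpushares": "512"}`,
    			},
    		},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{Name: "demo", Image: "registry.k8s.io/pause:3.9"}},
    		},
    	}
    }
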
	I1128 00:08:42.250095   28506 cni.go:84] Creating CNI manager for ""
	I1128 00:08:42.250106   28506 cni.go:136] 3 nodes found, recommending kindnet
	I1128 00:08:42.250118   28506 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:08:42.250145   28506 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.97 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-883509 NodeName:multinode-883509-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:08:42.250281   28506 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-883509-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:08:42.250350   28506 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-883509-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-883509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
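
minikube renders the kubeadm config and kubelet unit above from Go templates. The following is only a minimal, self-contained sketch of that rendering idea, with a hypothetical template and struct (not minikube's actual code):

    package main

    import (
    	"os"
    	"text/template"
    )

    // A stripped-down InitConfiguration template in the spirit of the config
    // above. The template text and field names are hypothetical.
    var initCfg = template.Must(template.New("init").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `))

    func main() {
    	data := struct {
    		AdvertiseAddress, CRISocket, NodeName, NodeIP string
    		BindPort                                      int
    	}{
    		AdvertiseAddress: "192.168.39.97",
    		CRISocket:        "/var/run/crio/crio.sock",
    		NodeName:         "multinode-883509-m02",
    		NodeIP:           "192.168.39.97",
    		BindPort:         8443,
    	}
    	if err := initCfg.Execute(os.Stdout, data); err != nil {
    		panic(err)
    	}
    }
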
	I1128 00:08:42.250406   28506 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 00:08:42.260020   28506 command_runner.go:130] > kubeadm
	I1128 00:08:42.260039   28506 command_runner.go:130] > kubectl
	I1128 00:08:42.260045   28506 command_runner.go:130] > kubelet
	I1128 00:08:42.260066   28506 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:08:42.260123   28506 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1128 00:08:42.267803   28506 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1128 00:08:42.283890   28506 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:08:42.299606   28506 ssh_runner.go:195] Run: grep 192.168.39.159	control-plane.minikube.internal$ /etc/hosts
	I1128 00:08:42.302986   28506 command_runner.go:130] > 192.168.39.159	control-plane.minikube.internal
	I1128 00:08:42.303215   28506 host.go:66] Checking if "multinode-883509" exists ...
	I1128 00:08:42.303505   28506 config.go:182] Loaded profile config "multinode-883509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:08:42.303518   28506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 00:08:42.303595   28506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:08:42.318335   28506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46507
	I1128 00:08:42.318732   28506 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:08:42.319114   28506 main.go:141] libmachine: Using API Version  1
	I1128 00:08:42.319131   28506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:08:42.319452   28506 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:08:42.319629   28506 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1128 00:08:42.319770   28506 start.go:304] JoinCluster: &{Name:multinode-883509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-883509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.128 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:08:42.319926   28506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1128 00:08:42.319946   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1128 00:08:42.322796   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:08:42.323185   28506 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 01:04:45 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1128 00:08:42.323205   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:08:42.323325   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1128 00:08:42.323476   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1128 00:08:42.323610   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1128 00:08:42.323749   28506 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa Username:docker}
	I1128 00:08:42.503673   28506 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token j6v6pz.o3pbvtrrvnw3gy4s --discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
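
The join command above is obtained by running `kubeadm token create --print-join-command --ttl=0` on the control plane. A minimal sketch of the same invocation from Go (assumes kubeadm is on PATH and admin credentials are available; minikube runs the equivalent over SSH inside the VM):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // joinCommand runs "kubeadm token create --print-join-command --ttl=0" and
    // returns the printed join command, mirroring the step logged above.
    func joinCommand() (string, error) {
    	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
    	if err != nil {
    		return "", fmt.Errorf("kubeadm token create: %w", err)
    	}
    	return string(out), nil
    }

    func main() {
    	cmd, err := joinCommand()
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Print(cmd)
    }
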
	I1128 00:08:42.506056   28506 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.97 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1128 00:08:42.506094   28506 host.go:66] Checking if "multinode-883509" exists ...
	I1128 00:08:42.506381   28506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 00:08:42.506425   28506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:08:42.520555   28506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33027
	I1128 00:08:42.520997   28506 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:08:42.521448   28506 main.go:141] libmachine: Using API Version  1
	I1128 00:08:42.521476   28506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:08:42.521801   28506 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:08:42.521973   28506 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1128 00:08:42.522140   28506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-883509-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1128 00:08:42.522157   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1128 00:08:42.524659   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:08:42.525069   28506 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 01:04:45 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1128 00:08:42.525087   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:08:42.525201   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1128 00:08:42.525358   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1128 00:08:42.525495   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1128 00:08:42.525618   28506 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa Username:docker}
	I1128 00:08:42.717378   28506 command_runner.go:130] > node/multinode-883509-m02 cordoned
	I1128 00:08:45.757252   28506 command_runner.go:130] > pod "busybox-5bc68d56bd-lgwvm" has DeletionTimestamp older than 1 seconds, skipping
	I1128 00:08:45.757281   28506 command_runner.go:130] > node/multinode-883509-m02 drained
	I1128 00:08:45.758944   28506 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1128 00:08:45.758965   28506 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-t4wlq, kube-system/kube-proxy-fvsj6
	I1128 00:08:45.758984   28506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-883509-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.236825595s)
	I1128 00:08:45.758998   28506 node.go:108] successfully drained node "m02"
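
The drain above is a plain kubectl invocation. A rough Go equivalent, using the binary and kubeconfig paths shown in the log and omitting the deprecated --delete-local-data flag the log itself warns about:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    // drainNode mirrors the kubectl drain invocation logged above. The kubectl
    // and kubeconfig paths are those used inside the VM; adjust for other
    // environments.
    func drainNode(node string) error {
    	cmd := exec.Command(
    		"/var/lib/minikube/binaries/v1.28.4/kubectl", "drain", node,
    		"--force", "--grace-period=1", "--skip-wait-for-delete-timeout=1",
    		"--disable-eviction", "--ignore-daemonsets", "--delete-emptydir-data",
    	)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	if err := drainNode("multinode-883509-m02"); err != nil {
    		log.Fatal(err)
    	}
    }
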
	I1128 00:08:45.759404   28506 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:08:45.759650   28506 kapi.go:59] client config for multinode-883509: &rest.Config{Host:"https://192.168.39.159:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 00:08:45.760033   28506 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1128 00:08:45.760098   28506 round_trippers.go:463] DELETE https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1128 00:08:45.760109   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:45.760119   28506 round_trippers.go:473]     Content-Type: application/json
	I1128 00:08:45.760131   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:45.760143   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:45.775638   28506 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1128 00:08:45.775663   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:45.775673   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:45.775681   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:45.775688   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:45.775695   28506 round_trippers.go:580]     Content-Length: 171
	I1128 00:08:45.775702   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:45 GMT
	I1128 00:08:45.775712   28506 round_trippers.go:580]     Audit-Id: f22531f8-99d0-4e18-abbd-d9e01e41680a
	I1128 00:08:45.775722   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:45.775755   28506 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-883509-m02","kind":"nodes","uid":"bab7a2f0-69c5-4ea7-9f9a-3797513ecf61"}}
	I1128 00:08:45.775791   28506 node.go:124] successfully deleted node "m02"
	I1128 00:08:45.775802   28506 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.97 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
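
The DELETE /api/v1/nodes request traced above can also be issued through a typed client-go clientset; a minimal sketch (kubeconfig path as in the log, everything else illustrative):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // deleteNode issues DELETE /api/v1/nodes/<name>, the same call shown in the
    // round-tripper trace above, via a typed clientset.
    func deleteNode(kubeconfig, name string) error {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return err
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return err
    	}
    	return clientset.CoreV1().Nodes().Delete(context.TODO(), name, metav1.DeleteOptions{})
    }

    func main() {
    	if err := deleteNode("/var/lib/minikube/kubeconfig", "multinode-883509-m02"); err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("node deleted")
    }
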
	I1128 00:08:45.775824   28506 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.97 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1128 00:08:45.775845   28506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j6v6pz.o3pbvtrrvnw3gy4s --discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-883509-m02"
	I1128 00:08:45.830945   28506 command_runner.go:130] ! W1128 00:08:45.820562    2614 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1128 00:08:45.831306   28506 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1128 00:08:45.982612   28506 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1128 00:08:45.982651   28506 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1128 00:08:46.756729   28506 command_runner.go:130] > [preflight] Running pre-flight checks
	I1128 00:08:46.756764   28506 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1128 00:08:46.756781   28506 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1128 00:08:46.756794   28506 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 00:08:46.756811   28506 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 00:08:46.756819   28506 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1128 00:08:46.756833   28506 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1128 00:08:46.756841   28506 command_runner.go:130] > This node has joined the cluster:
	I1128 00:08:46.756860   28506 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1128 00:08:46.756873   28506 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1128 00:08:46.756886   28506 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1128 00:08:46.757410   28506 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1128 00:08:47.026590   28506 start.go:306] JoinCluster complete in 4.706814806s
	I1128 00:08:47.026627   28506 cni.go:84] Creating CNI manager for ""
	I1128 00:08:47.026634   28506 cni.go:136] 3 nodes found, recommending kindnet
	I1128 00:08:47.026687   28506 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1128 00:08:47.032958   28506 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1128 00:08:47.032978   28506 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1128 00:08:47.032994   28506 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1128 00:08:47.033004   28506 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 00:08:47.033014   28506 command_runner.go:130] > Access: 2023-11-28 00:04:46.123158983 +0000
	I1128 00:08:47.033022   28506 command_runner.go:130] > Modify: 2023-11-27 22:54:55.000000000 +0000
	I1128 00:08:47.033031   28506 command_runner.go:130] > Change: 2023-11-28 00:04:44.129158983 +0000
	I1128 00:08:47.033037   28506 command_runner.go:130] >  Birth: -
	I1128 00:08:47.033493   28506 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1128 00:08:47.033509   28506 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1128 00:08:47.051974   28506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1128 00:08:47.424529   28506 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1128 00:08:47.424553   28506 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1128 00:08:47.424562   28506 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1128 00:08:47.424569   28506 command_runner.go:130] > daemonset.apps/kindnet configured
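
Applying the kindnet manifest above is an ordinary kubectl apply; sketched from Go with the paths the log shows:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    // applyCNI mirrors the "kubectl apply -f /var/tmp/minikube/cni.yaml" step
    // above, using the versioned kubectl binary and kubeconfig paths from the log.
    func applyCNI(manifest string) error {
    	cmd := exec.Command(
    		"/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
    		"--kubeconfig=/var/lib/minikube/kubeconfig", "-f", manifest,
    	)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	if err := applyCNI("/var/tmp/minikube/cni.yaml"); err != nil {
    		log.Fatal(err)
    	}
    }
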
	I1128 00:08:47.425100   28506 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:08:47.425417   28506 kapi.go:59] client config for multinode-883509: &rest.Config{Host:"https://192.168.39.159:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 00:08:47.425690   28506 round_trippers.go:463] GET https://192.168.39.159:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1128 00:08:47.425702   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:47.425710   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:47.425716   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:47.430274   28506 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 00:08:47.430292   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:47.430302   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:47.430315   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:47.430326   28506 round_trippers.go:580]     Content-Length: 291
	I1128 00:08:47.430335   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:47 GMT
	I1128 00:08:47.430340   28506 round_trippers.go:580]     Audit-Id: 4e1297f8-0c9d-4857-b1af-ab1789629c37
	I1128 00:08:47.430347   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:47.430352   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:47.430491   28506 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"6e7cc9d5-ec42-4b16-9afb-9c3b43521ec6","resourceVersion":"914","creationTimestamp":"2023-11-27T23:54:52Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1128 00:08:47.430594   28506 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-883509" context rescaled to 1 replicas
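
The coredns rescale above goes through the Deployment's Scale subresource. A hedged client-go sketch (clientset construction omitted, see the earlier sketch; in this run the replica count was already 1, so only the GET happens):

    package sketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS reads the coredns Deployment's Scale subresource and
    // updates it only when the replica count differs, mirroring the step above.
    func rescaleCoreDNS(ctx context.Context, clientset kubernetes.Interface, replicas int32) error {
    	deployments := clientset.AppsV1().Deployments("kube-system")
    	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	if scale.Spec.Replicas == replicas {
    		return nil
    	}
    	scale.Spec.Replicas = replicas
    	_, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
    	return err
    }
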
	I1128 00:08:47.430627   28506 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.97 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1128 00:08:47.433126   28506 out.go:177] * Verifying Kubernetes components...
	I1128 00:08:47.434456   28506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:08:47.448372   28506 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:08:47.448683   28506 kapi.go:59] client config for multinode-883509: &rest.Config{Host:"https://192.168.39.159:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 00:08:47.448933   28506 node_ready.go:35] waiting up to 6m0s for node "multinode-883509-m02" to be "Ready" ...
	I1128 00:08:47.448992   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1128 00:08:47.448999   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:47.449006   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:47.449012   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:47.452137   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:08:47.452156   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:47.452166   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:47.452174   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:47.452182   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:47 GMT
	I1128 00:08:47.452190   28506 round_trippers.go:580]     Audit-Id: 4cd005d6-5d3a-4fae-8009-3f3e3e8d0237
	I1128 00:08:47.452199   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:47.452208   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:47.452583   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"d053a818-316c-479c-8722-1b9e01fced24","resourceVersion":"1155","creationTimestamp":"2023-11-28T00:08:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T00:08:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T00:08:46Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I1128 00:08:47.452850   28506 node_ready.go:49] node "multinode-883509-m02" has status "Ready":"True"
	I1128 00:08:47.452866   28506 node_ready.go:38] duration metric: took 3.916155ms waiting for node "multinode-883509-m02" to be "Ready" ...
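
The readiness wait above repeatedly GETs the node and checks its NodeReady condition. A simplified poll loop (interval and error wording illustrative, not minikube's exact waiter):

    package sketch

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls GET /api/v1/nodes/<name> until the NodeReady condition
    // is True or the timeout expires, the same check performed above.
    func waitNodeReady(ctx context.Context, clientset kubernetes.Interface, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		node, err := clientset.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, cond := range node.Status.Conditions {
    				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("node %q not Ready after %v", name, timeout)
    		}
    		time.Sleep(3 * time.Second)
    	}
    }
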
	I1128 00:08:47.452876   28506 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:08:47.452939   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I1128 00:08:47.452950   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:47.452960   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:47.452971   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:47.456635   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:08:47.456654   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:47.456663   28506 round_trippers.go:580]     Audit-Id: 732273f1-069f-4fe3-8c7c-3bb96a2e4051
	I1128 00:08:47.456671   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:47.456679   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:47.456688   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:47.456708   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:47.456716   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:47 GMT
	I1128 00:08:47.458673   28506 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1161"},"items":[{"metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"910","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82238 chars]
	I1128 00:08:47.461828   28506 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace to be "Ready" ...
	I1128 00:08:47.461903   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1128 00:08:47.461914   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:47.461925   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:47.461935   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:47.464252   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:08:47.464270   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:47.464279   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:47.464287   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:47 GMT
	I1128 00:08:47.464295   28506 round_trippers.go:580]     Audit-Id: e2154436-5416-4ce3-bf06-3df77782dd31
	I1128 00:08:47.464306   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:47.464317   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:47.464329   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:47.464676   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"910","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1128 00:08:47.465205   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:08:47.465224   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:47.465234   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:47.465243   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:47.467210   28506 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 00:08:47.467225   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:47.467234   28506 round_trippers.go:580]     Audit-Id: 79d980d6-dc84-492b-bfbe-55ef7bf8c547
	I1128 00:08:47.467243   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:47.467256   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:47.467268   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:47.467281   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:47.467292   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:47 GMT
	I1128 00:08:47.467597   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"930","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1128 00:08:47.467855   28506 pod_ready.go:92] pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace has status "Ready":"True"
	I1128 00:08:47.467870   28506 pod_ready.go:81] duration metric: took 6.021366ms waiting for pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace to be "Ready" ...
	I1128 00:08:47.467880   28506 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:08:47.467924   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-883509
	I1128 00:08:47.467934   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:47.467944   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:47.467954   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:47.470136   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:08:47.470155   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:47.470164   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:47.470174   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:47.470187   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:47.470195   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:47 GMT
	I1128 00:08:47.470207   28506 round_trippers.go:580]     Audit-Id: ad8bfdfc-6c30-481a-ae79-b58d695deb30
	I1128 00:08:47.470217   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:47.470433   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-883509","namespace":"kube-system","uid":"58bb8943-0a7c-4d4c-a090-ea8de587f504","resourceVersion":"887","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.159:2379","kubernetes.io/config.hash":"8d23c211c8738dad6e022e03cd2c9ea7","kubernetes.io/config.mirror":"8d23c211c8738dad6e022e03cd2c9ea7","kubernetes.io/config.seen":"2023-11-27T23:54:53.116542435Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1128 00:08:47.470716   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:08:47.470730   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:47.470740   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:47.470758   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:47.472699   28506 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 00:08:47.472717   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:47.472726   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:47.472734   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:47 GMT
	I1128 00:08:47.472743   28506 round_trippers.go:580]     Audit-Id: 02ba553a-ceb2-4bcd-83e6-cef60c6cffa4
	I1128 00:08:47.472751   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:47.472775   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:47.472787   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:47.473338   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"930","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1128 00:08:47.473586   28506 pod_ready.go:92] pod "etcd-multinode-883509" in "kube-system" namespace has status "Ready":"True"
	I1128 00:08:47.473600   28506 pod_ready.go:81] duration metric: took 5.713162ms waiting for pod "etcd-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:08:47.473621   28506 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:08:47.473677   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-883509
	I1128 00:08:47.473687   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:47.473697   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:47.473709   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:47.475474   28506 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 00:08:47.475492   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:47.475502   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:47 GMT
	I1128 00:08:47.475510   28506 round_trippers.go:580]     Audit-Id: 0debe2e1-31eb-4e9b-b451-d7fdd36b1dc5
	I1128 00:08:47.475519   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:47.475531   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:47.475540   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:47.475551   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:47.475658   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-883509","namespace":"kube-system","uid":"0a144c07-5db8-418a-ad15-110fabc7f377","resourceVersion":"880","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.159:8443","kubernetes.io/config.hash":"3b5e7b5fdb84862f46e6248e54c84795","kubernetes.io/config.mirror":"3b5e7b5fdb84862f46e6248e54c84795","kubernetes.io/config.seen":"2023-11-27T23:54:53.116543447Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1128 00:08:47.476091   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:08:47.476109   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:47.476120   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:47.476136   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:47.477858   28506 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 00:08:47.477873   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:47.477882   28506 round_trippers.go:580]     Audit-Id: eebde22d-e38e-4f20-b752-aa7ccd2cbdbb
	I1128 00:08:47.477890   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:47.477899   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:47.477914   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:47.477925   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:47.477937   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:47 GMT
	I1128 00:08:47.478107   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"930","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1128 00:08:47.478407   28506 pod_ready.go:92] pod "kube-apiserver-multinode-883509" in "kube-system" namespace has status "Ready":"True"
	I1128 00:08:47.478422   28506 pod_ready.go:81] duration metric: took 4.790968ms waiting for pod "kube-apiserver-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:08:47.478431   28506 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:08:47.478483   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-883509
	I1128 00:08:47.478492   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:47.478498   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:47.478505   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:47.481756   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:08:47.481772   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:47.481781   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:47.481789   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:47.481797   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:47.481806   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:47 GMT
	I1128 00:08:47.481821   28506 round_trippers.go:580]     Audit-Id: 4cba7aea-e6ff-433e-b343-3dbead057757
	I1128 00:08:47.481831   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:47.482030   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-883509","namespace":"kube-system","uid":"f8474e48-c333-4772-ae1f-59cdb2bf95eb","resourceVersion":"882","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"de58e44a016d081ac103af6880ca64f0","kubernetes.io/config.mirror":"de58e44a016d081ac103af6880ca64f0","kubernetes.io/config.seen":"2023-11-27T23:54:53.116544230Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1128 00:08:47.482351   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:08:47.482363   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:47.482373   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:47.482382   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:47.486119   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:08:47.486136   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:47.486144   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:47 GMT
	I1128 00:08:47.486154   28506 round_trippers.go:580]     Audit-Id: 2cf33a99-486a-45a6-9cae-22736b11c6ab
	I1128 00:08:47.486163   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:47.486173   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:47.486186   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:47.486194   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:47.486339   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"930","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1128 00:08:47.486642   28506 pod_ready.go:92] pod "kube-controller-manager-multinode-883509" in "kube-system" namespace has status "Ready":"True"
	I1128 00:08:47.486658   28506 pod_ready.go:81] duration metric: took 8.216169ms waiting for pod "kube-controller-manager-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:08:47.486670   28506 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6dvv4" in "kube-system" namespace to be "Ready" ...
	I1128 00:08:47.650111   28506 request.go:629] Waited for 163.381767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6dvv4
	I1128 00:08:47.650202   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6dvv4
	I1128 00:08:47.650214   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:47.650224   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:47.650237   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:47.652661   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:08:47.652684   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:47.652693   28506 round_trippers.go:580]     Audit-Id: 472d6aed-4804-4dc9-82fa-965bd7d10919
	I1128 00:08:47.652702   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:47.652709   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:47.652718   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:47.652726   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:47.652733   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:47 GMT
	I1128 00:08:47.653006   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6dvv4","generateName":"kube-proxy-","namespace":"kube-system","uid":"c6651c7d-33a2-4a46-9d73-e60ee19557fa","resourceVersion":"726","creationTimestamp":"2023-11-27T23:56:37Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"dea68644-28a8-4da5-b7c7-c0035d2ae817","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:56:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dea68644-28a8-4da5-b7c7-c0035d2ae817\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1128 00:08:47.849902   28506 request.go:629] Waited for 196.40052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m03
	I1128 00:08:47.849976   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m03
	I1128 00:08:47.849981   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:47.849988   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:47.849994   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:47.852816   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:08:47.852835   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:47.852842   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:47.852851   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:47.852859   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:47.852868   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:47.852877   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:47 GMT
	I1128 00:08:47.852887   28506 round_trippers.go:580]     Audit-Id: 9f22a81c-4fa7-43e0-a932-0588b1890e5a
	I1128 00:08:47.853129   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m03","uid":"2bc47ce6-2761-4c93-b9f7-cf65c531732f","resourceVersion":"891","creationTimestamp":"2023-11-27T23:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:57:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I1128 00:08:47.853422   28506 pod_ready.go:92] pod "kube-proxy-6dvv4" in "kube-system" namespace has status "Ready":"True"
	I1128 00:08:47.853439   28506 pod_ready.go:81] duration metric: took 366.762356ms waiting for pod "kube-proxy-6dvv4" in "kube-system" namespace to be "Ready" ...
	I1128 00:08:47.853449   28506 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7g246" in "kube-system" namespace to be "Ready" ...
	I1128 00:08:48.049882   28506 request.go:629] Waited for 196.358704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7g246
	I1128 00:08:48.049949   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7g246
	I1128 00:08:48.049958   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:48.049980   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:48.049993   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:48.053186   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:08:48.053204   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:48.053211   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:48.053216   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:48.053222   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:48.053227   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:48 GMT
	I1128 00:08:48.053235   28506 round_trippers.go:580]     Audit-Id: 9d46dac7-6284-483c-b44d-63b98dad5359
	I1128 00:08:48.053243   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:48.053546   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7g246","generateName":"kube-proxy-","namespace":"kube-system","uid":"c03a2053-f013-4269-a5e1-0acfebfc606c","resourceVersion":"810","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"dea68644-28a8-4da5-b7c7-c0035d2ae817","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dea68644-28a8-4da5-b7c7-c0035d2ae817\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1128 00:08:48.249288   28506 request.go:629] Waited for 195.333355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:08:48.249357   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:08:48.249370   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:48.249382   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:48.249395   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:48.251932   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:08:48.251954   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:48.251963   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:48.251977   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:48.251986   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:48 GMT
	I1128 00:08:48.251994   28506 round_trippers.go:580]     Audit-Id: 21821e6f-db5f-4c9e-b42e-770e374d3aff
	I1128 00:08:48.252002   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:48.252014   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:48.252225   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"930","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1128 00:08:48.252639   28506 pod_ready.go:92] pod "kube-proxy-7g246" in "kube-system" namespace has status "Ready":"True"
	I1128 00:08:48.252657   28506 pod_ready.go:81] duration metric: took 399.201053ms waiting for pod "kube-proxy-7g246" in "kube-system" namespace to be "Ready" ...
	I1128 00:08:48.252670   28506 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fvsj6" in "kube-system" namespace to be "Ready" ...
	I1128 00:08:48.449923   28506 request.go:629] Waited for 197.184377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvsj6
	I1128 00:08:48.449997   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvsj6
	I1128 00:08:48.450009   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:48.450021   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:48.450034   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:48.454814   28506 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 00:08:48.454838   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:48.454848   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:48.454856   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:48.454864   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:48.454873   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:48 GMT
	I1128 00:08:48.454885   28506 round_trippers.go:580]     Audit-Id: af6ac8ad-1a51-484c-b3fe-008058e3a686
	I1128 00:08:48.454894   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:48.455035   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fvsj6","generateName":"kube-proxy-","namespace":"kube-system","uid":"d0e7a02e-868c-4774-885c-8b5ad728f451","resourceVersion":"1175","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"dea68644-28a8-4da5-b7c7-c0035d2ae817","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dea68644-28a8-4da5-b7c7-c0035d2ae817\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I1128 00:08:48.650023   28506 request.go:629] Waited for 194.39072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1128 00:08:48.650084   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1128 00:08:48.650091   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:48.650102   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:48.650111   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:48.652909   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:08:48.652936   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:48.652948   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:48.652957   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:48.652966   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:48.652976   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:48.652989   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:48 GMT
	I1128 00:08:48.653000   28506 round_trippers.go:580]     Audit-Id: edd21c64-854a-4632-9902-39e964d49978
	I1128 00:08:48.653114   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"d053a818-316c-479c-8722-1b9e01fced24","resourceVersion":"1155","creationTimestamp":"2023-11-28T00:08:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T00:08:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T00:08:46Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I1128 00:08:48.653467   28506 pod_ready.go:92] pod "kube-proxy-fvsj6" in "kube-system" namespace has status "Ready":"True"
	I1128 00:08:48.653492   28506 pod_ready.go:81] duration metric: took 400.813269ms waiting for pod "kube-proxy-fvsj6" in "kube-system" namespace to be "Ready" ...
	I1128 00:08:48.653505   28506 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:08:48.849943   28506 request.go:629] Waited for 196.365286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-883509
	I1128 00:08:48.850012   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-883509
	I1128 00:08:48.850024   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:48.850035   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:48.850049   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:48.853242   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:08:48.853264   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:48.853274   28506 round_trippers.go:580]     Audit-Id: bc30bfdd-0d62-49a3-a8ab-219416a439aa
	I1128 00:08:48.853282   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:48.853291   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:48.853300   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:48.853311   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:48.853324   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:48 GMT
	I1128 00:08:48.853518   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-883509","namespace":"kube-system","uid":"191f6a8c-7604-4f03-ba5a-d717b27f634b","resourceVersion":"902","creationTimestamp":"2023-11-27T23:54:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f3690327bcacf0b7b0b21542aa013461","kubernetes.io/config.mirror":"f3690327bcacf0b7b0b21542aa013461","kubernetes.io/config.seen":"2023-11-27T23:54:44.598174974Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1128 00:08:49.049957   28506 request.go:629] Waited for 196.077551ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:08:49.050023   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:08:49.050031   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:49.050043   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:49.050074   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:49.052969   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:08:49.052988   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:49.052995   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:49.053001   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:49.053006   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:49 GMT
	I1128 00:08:49.053011   28506 round_trippers.go:580]     Audit-Id: cc19a815-d66d-413e-8432-75b6fa816578
	I1128 00:08:49.053016   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:49.053021   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:49.053212   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"930","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1128 00:08:49.053633   28506 pod_ready.go:92] pod "kube-scheduler-multinode-883509" in "kube-system" namespace has status "Ready":"True"
	I1128 00:08:49.053656   28506 pod_ready.go:81] duration metric: took 400.137989ms waiting for pod "kube-scheduler-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:08:49.053668   28506 pod_ready.go:38] duration metric: took 1.600780218s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:08:49.053679   28506 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:08:49.053721   28506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:08:49.067376   28506 system_svc.go:56] duration metric: took 13.689468ms WaitForService to wait for kubelet.
	I1128 00:08:49.067400   28506 kubeadm.go:581] duration metric: took 1.636745239s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:08:49.067423   28506 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:08:49.249834   28506 request.go:629] Waited for 182.345625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes
	I1128 00:08:49.249895   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes
	I1128 00:08:49.249900   28506 round_trippers.go:469] Request Headers:
	I1128 00:08:49.249908   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:08:49.249914   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:08:49.253087   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:08:49.253113   28506 round_trippers.go:577] Response Headers:
	I1128 00:08:49.253123   28506 round_trippers.go:580]     Audit-Id: 33aea3f9-342a-4316-9bef-5ee91e00e58c
	I1128 00:08:49.253132   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:08:49.253139   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:08:49.253152   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:08:49.253160   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:08:49.253168   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:08:49 GMT
	I1128 00:08:49.253643   28506 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1177"},"items":[{"metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"930","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15105 chars]
	I1128 00:08:49.254358   28506 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:08:49.254378   28506 node_conditions.go:123] node cpu capacity is 2
	I1128 00:08:49.254390   28506 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:08:49.254396   28506 node_conditions.go:123] node cpu capacity is 2
	I1128 00:08:49.254405   28506 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:08:49.254411   28506 node_conditions.go:123] node cpu capacity is 2
	I1128 00:08:49.254420   28506 node_conditions.go:105] duration metric: took 186.991118ms to run NodePressure ...
	I1128 00:08:49.254432   28506 start.go:228] waiting for startup goroutines ...
	I1128 00:08:49.254452   28506 start.go:242] writing updated cluster config ...
	I1128 00:08:49.254982   28506 config.go:182] Loaded profile config "multinode-883509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:08:49.255097   28506 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/config.json ...
	I1128 00:08:49.257672   28506 out.go:177] * Starting worker node multinode-883509-m03 in cluster multinode-883509
	I1128 00:08:49.258799   28506 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:08:49.258816   28506 cache.go:56] Caching tarball of preloaded images
	I1128 00:08:49.258914   28506 preload.go:174] Found /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 00:08:49.258925   28506 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 00:08:49.259002   28506 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/config.json ...
	I1128 00:08:49.259138   28506 start.go:365] acquiring machines lock for multinode-883509-m03: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 00:08:49.259173   28506 start.go:369] acquired machines lock for "multinode-883509-m03" in 19.203µs
	I1128 00:08:49.259186   28506 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:08:49.259192   28506 fix.go:54] fixHost starting: m03
	I1128 00:08:49.259418   28506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 00:08:49.259444   28506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:08:49.273217   28506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40339
	I1128 00:08:49.273635   28506 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:08:49.274068   28506 main.go:141] libmachine: Using API Version  1
	I1128 00:08:49.274086   28506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:08:49.274330   28506 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:08:49.274505   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .DriverName
	I1128 00:08:49.274667   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetState
	I1128 00:08:49.276167   28506 fix.go:102] recreateIfNeeded on multinode-883509-m03: state=Running err=<nil>
	W1128 00:08:49.276180   28506 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:08:49.277864   28506 out.go:177] * Updating the running kvm2 "multinode-883509-m03" VM ...
	I1128 00:08:49.279154   28506 machine.go:88] provisioning docker machine ...
	I1128 00:08:49.279169   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .DriverName
	I1128 00:08:49.279377   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetMachineName
	I1128 00:08:49.279521   28506 buildroot.go:166] provisioning hostname "multinode-883509-m03"
	I1128 00:08:49.279540   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetMachineName
	I1128 00:08:49.279657   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHHostname
	I1128 00:08:49.281658   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:08:49.282060   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:98:58", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:56:23 +0000 UTC Type:0 Mac:52:54:00:28:98:58 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:multinode-883509-m03 Clientid:01:52:54:00:28:98:58}
	I1128 00:08:49.282087   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:08:49.282240   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHPort
	I1128 00:08:49.282407   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHKeyPath
	I1128 00:08:49.282530   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHKeyPath
	I1128 00:08:49.282677   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHUsername
	I1128 00:08:49.282825   28506 main.go:141] libmachine: Using SSH client type: native
	I1128 00:08:49.283115   28506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I1128 00:08:49.283127   28506 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-883509-m03 && echo "multinode-883509-m03" | sudo tee /etc/hostname
	I1128 00:08:49.423882   28506 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-883509-m03
	
	I1128 00:08:49.423909   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHHostname
	I1128 00:08:49.426646   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:08:49.427066   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:98:58", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:56:23 +0000 UTC Type:0 Mac:52:54:00:28:98:58 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:multinode-883509-m03 Clientid:01:52:54:00:28:98:58}
	I1128 00:08:49.427096   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:08:49.427256   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHPort
	I1128 00:08:49.427437   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHKeyPath
	I1128 00:08:49.427584   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHKeyPath
	I1128 00:08:49.427787   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHUsername
	I1128 00:08:49.427968   28506 main.go:141] libmachine: Using SSH client type: native
	I1128 00:08:49.428320   28506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I1128 00:08:49.428338   28506 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-883509-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-883509-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-883509-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:08:49.553620   28506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:08:49.553649   28506 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:08:49.553671   28506 buildroot.go:174] setting up certificates
	I1128 00:08:49.553679   28506 provision.go:83] configureAuth start
	I1128 00:08:49.553687   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetMachineName
	I1128 00:08:49.553992   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetIP
	I1128 00:08:49.556620   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:08:49.557022   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:98:58", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:56:23 +0000 UTC Type:0 Mac:52:54:00:28:98:58 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:multinode-883509-m03 Clientid:01:52:54:00:28:98:58}
	I1128 00:08:49.557050   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:08:49.557221   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHHostname
	I1128 00:08:49.559990   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:08:49.560398   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:98:58", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:56:23 +0000 UTC Type:0 Mac:52:54:00:28:98:58 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:multinode-883509-m03 Clientid:01:52:54:00:28:98:58}
	I1128 00:08:49.560426   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:08:49.560605   28506 provision.go:138] copyHostCerts
	I1128 00:08:49.560643   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:08:49.560681   28506 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:08:49.560699   28506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:08:49.560803   28506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:08:49.560893   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:08:49.560917   28506 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:08:49.560924   28506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:08:49.560960   28506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:08:49.561019   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:08:49.561040   28506 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:08:49.561049   28506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:08:49.561080   28506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:08:49.561140   28506 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.multinode-883509-m03 san=[192.168.39.128 192.168.39.128 localhost 127.0.0.1 minikube multinode-883509-m03]
	I1128 00:08:49.804531   28506 provision.go:172] copyRemoteCerts
	I1128 00:08:49.804607   28506 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:08:49.804637   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHHostname
	I1128 00:08:49.807202   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:08:49.807507   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:98:58", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:56:23 +0000 UTC Type:0 Mac:52:54:00:28:98:58 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:multinode-883509-m03 Clientid:01:52:54:00:28:98:58}
	I1128 00:08:49.807541   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:08:49.807707   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHPort
	I1128 00:08:49.807907   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHKeyPath
	I1128 00:08:49.808066   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHUsername
	I1128 00:08:49.808171   28506 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m03/id_rsa Username:docker}
	I1128 00:08:49.905890   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1128 00:08:49.905961   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:08:49.929455   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1128 00:08:49.929515   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1128 00:08:49.952169   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1128 00:08:49.952233   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 00:08:49.975292   28506 provision.go:86] duration metric: configureAuth took 421.601909ms
	I1128 00:08:49.975320   28506 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:08:49.975560   28506 config.go:182] Loaded profile config "multinode-883509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:08:49.975648   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHHostname
	I1128 00:08:49.978249   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:08:49.978624   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:98:58", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:56:23 +0000 UTC Type:0 Mac:52:54:00:28:98:58 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:multinode-883509-m03 Clientid:01:52:54:00:28:98:58}
	I1128 00:08:49.978653   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:08:49.978808   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHPort
	I1128 00:08:49.978999   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHKeyPath
	I1128 00:08:49.979177   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHKeyPath
	I1128 00:08:49.979351   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHUsername
	I1128 00:08:49.979528   28506 main.go:141] libmachine: Using SSH client type: native
	I1128 00:08:49.979891   28506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I1128 00:08:49.979907   28506 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:10:20.673791   28506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:10:20.673823   28506 machine.go:91] provisioned docker machine in 1m31.394658221s
	I1128 00:10:20.673833   28506 start.go:300] post-start starting for "multinode-883509-m03" (driver="kvm2")
	I1128 00:10:20.673881   28506 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:10:20.673905   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .DriverName
	I1128 00:10:20.674318   28506 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:10:20.674353   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHHostname
	I1128 00:10:20.677426   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:10:20.677883   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:98:58", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:56:23 +0000 UTC Type:0 Mac:52:54:00:28:98:58 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:multinode-883509-m03 Clientid:01:52:54:00:28:98:58}
	I1128 00:10:20.677917   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:10:20.678126   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHPort
	I1128 00:10:20.678308   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHKeyPath
	I1128 00:10:20.678436   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHUsername
	I1128 00:10:20.678598   28506 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m03/id_rsa Username:docker}
	I1128 00:10:20.780204   28506 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:10:20.784475   28506 command_runner.go:130] > NAME=Buildroot
	I1128 00:10:20.784497   28506 command_runner.go:130] > VERSION=2021.02.12-1-g8be4f20-dirty
	I1128 00:10:20.784502   28506 command_runner.go:130] > ID=buildroot
	I1128 00:10:20.784508   28506 command_runner.go:130] > VERSION_ID=2021.02.12
	I1128 00:10:20.784512   28506 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1128 00:10:20.784661   28506 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:10:20.784681   28506 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:10:20.784775   28506 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:10:20.784877   28506 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:10:20.784891   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> /etc/ssl/certs/119302.pem
	I1128 00:10:20.784994   28506 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:10:20.794643   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:10:20.820724   28506 start.go:303] post-start completed in 146.879143ms
	I1128 00:10:20.820772   28506 fix.go:56] fixHost completed within 1m31.561557342s
	I1128 00:10:20.820799   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHHostname
	I1128 00:10:20.823741   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:10:20.824130   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:98:58", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:56:23 +0000 UTC Type:0 Mac:52:54:00:28:98:58 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:multinode-883509-m03 Clientid:01:52:54:00:28:98:58}
	I1128 00:10:20.824159   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:10:20.824346   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHPort
	I1128 00:10:20.824593   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHKeyPath
	I1128 00:10:20.824795   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHKeyPath
	I1128 00:10:20.824933   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHUsername
	I1128 00:10:20.825098   28506 main.go:141] libmachine: Using SSH client type: native
	I1128 00:10:20.825447   28506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I1128 00:10:20.825463   28506 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:10:20.949790   28506 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701130220.940186584
	
	I1128 00:10:20.949815   28506 fix.go:206] guest clock: 1701130220.940186584
	I1128 00:10:20.949823   28506 fix.go:219] Guest: 2023-11-28 00:10:20.940186584 +0000 UTC Remote: 2023-11-28 00:10:20.82077832 +0000 UTC m=+645.659986041 (delta=119.408264ms)
	I1128 00:10:20.949837   28506 fix.go:190] guest clock delta is within tolerance: 119.408264ms
	I1128 00:10:20.949842   28506 start.go:83] releasing machines lock for "multinode-883509-m03", held for 1m31.690660855s
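As context for the fix.go lines just above: the provisioner reads the guest clock by running `date +%s.%N` over SSH and compares it against the host clock, accepting the existing host when the difference is small (here 119ms). The following is a minimal Go sketch of that comparison, not minikube's actual implementation; the function name guestClockDelta and the 1-second tolerance are illustrative assumptions, and only the parsing of the `seconds.nanoseconds` output mirrors what the log shows.

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // guestClockDelta parses the output of `date +%s.%N` captured from the guest
    // and returns the absolute difference from the supplied host time.
    func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return 0, err
    		}
    	}
    	guest := time.Unix(sec, nsec)
    	return time.Duration(math.Abs(float64(guest.Sub(host)))), nil
    }

    func main() {
    	// Sample value taken from the log line above.
    	delta, err := guestClockDelta("1701130220.940186584", time.Now())
    	if err != nil {
    		panic(err)
    	}
    	const tolerance = 1 * time.Second // assumed threshold for illustration; not shown in the log
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance)
    }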
	I1128 00:10:20.949861   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .DriverName
	I1128 00:10:20.950086   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetIP
	I1128 00:10:20.952700   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:10:20.953098   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:98:58", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:56:23 +0000 UTC Type:0 Mac:52:54:00:28:98:58 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:multinode-883509-m03 Clientid:01:52:54:00:28:98:58}
	I1128 00:10:20.953120   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:10:20.955344   28506 out.go:177] * Found network options:
	I1128 00:10:20.956776   28506 out.go:177]   - NO_PROXY=192.168.39.159,192.168.39.97
	W1128 00:10:20.958123   28506 proxy.go:119] fail to check proxy env: Error ip not in block
	W1128 00:10:20.958145   28506 proxy.go:119] fail to check proxy env: Error ip not in block
	I1128 00:10:20.958159   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .DriverName
	I1128 00:10:20.958642   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .DriverName
	I1128 00:10:20.958784   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .DriverName
	I1128 00:10:20.958854   28506 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:10:20.958881   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHHostname
	W1128 00:10:20.958923   28506 proxy.go:119] fail to check proxy env: Error ip not in block
	W1128 00:10:20.958945   28506 proxy.go:119] fail to check proxy env: Error ip not in block
	I1128 00:10:20.959007   28506 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:10:20.959023   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHHostname
	I1128 00:10:20.961407   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:10:20.961702   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:10:20.961736   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:98:58", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:56:23 +0000 UTC Type:0 Mac:52:54:00:28:98:58 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:multinode-883509-m03 Clientid:01:52:54:00:28:98:58}
	I1128 00:10:20.961754   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:10:20.961868   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHPort
	I1128 00:10:20.962072   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHKeyPath
	I1128 00:10:20.962218   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:98:58", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:56:23 +0000 UTC Type:0 Mac:52:54:00:28:98:58 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:multinode-883509-m03 Clientid:01:52:54:00:28:98:58}
	I1128 00:10:20.962228   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHUsername
	I1128 00:10:20.962254   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:10:20.962368   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHPort
	I1128 00:10:20.962445   28506 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m03/id_rsa Username:docker}
	I1128 00:10:20.962544   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHKeyPath
	I1128 00:10:20.962685   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetSSHUsername
	I1128 00:10:20.962819   28506 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m03/id_rsa Username:docker}
	I1128 00:10:21.071872   28506 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1128 00:10:21.199265   28506 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1128 00:10:21.205138   28506 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1128 00:10:21.205219   28506 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:10:21.205274   28506 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:10:21.215545   28506 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1128 00:10:21.215567   28506 start.go:472] detecting cgroup driver to use...
	I1128 00:10:21.215617   28506 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:10:21.230908   28506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:10:21.243407   28506 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:10:21.243482   28506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:10:21.259112   28506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:10:21.277138   28506 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:10:21.415506   28506 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:10:21.548045   28506 docker.go:219] disabling docker service ...
	I1128 00:10:21.548110   28506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:10:21.562066   28506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:10:21.575350   28506 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:10:21.709559   28506 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:10:21.837774   28506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:10:21.851198   28506 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:10:21.868740   28506 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1128 00:10:21.868827   28506 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 00:10:21.868888   28506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:10:21.879192   28506 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:10:21.879262   28506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:10:21.889795   28506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:10:21.899813   28506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:10:21.909754   28506 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:10:21.921908   28506 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:10:21.934179   28506 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1128 00:10:21.934235   28506 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:10:21.944613   28506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:10:22.078356   28506 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 00:10:23.000652   28506 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:10:23.000725   28506 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:10:23.006624   28506 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1128 00:10:23.006652   28506 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1128 00:10:23.006662   28506 command_runner.go:130] > Device: 16h/22d	Inode: 1219        Links: 1
	I1128 00:10:23.006672   28506 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 00:10:23.006681   28506 command_runner.go:130] > Access: 2023-11-28 00:10:22.925863699 +0000
	I1128 00:10:23.006690   28506 command_runner.go:130] > Modify: 2023-11-28 00:10:22.925863699 +0000
	I1128 00:10:23.006705   28506 command_runner.go:130] > Change: 2023-11-28 00:10:22.925863699 +0000
	I1128 00:10:23.006716   28506 command_runner.go:130] >  Birth: -
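The "Will wait 60s for socket path" step above polls the node until the restarted CRI-O exposes /var/run/crio/crio.sock (the stat output confirms it is a socket), before the next step waits on `crictl version`. Below is a minimal local sketch of such a wait loop, not minikube's code: the helper name waitForSocket and the 500ms poll interval are assumptions made for illustration.

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists and is a unix socket, or the timeout
    // elapses, mirroring the 60-second socket wait described in the log above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // assumed poll interval
    	}
    	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("crio socket is ready")
    }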
	I1128 00:10:23.006738   28506 start.go:540] Will wait 60s for crictl version
	I1128 00:10:23.006786   28506 ssh_runner.go:195] Run: which crictl
	I1128 00:10:23.010750   28506 command_runner.go:130] > /usr/bin/crictl
	I1128 00:10:23.010804   28506 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:10:23.055578   28506 command_runner.go:130] > Version:  0.1.0
	I1128 00:10:23.055599   28506 command_runner.go:130] > RuntimeName:  cri-o
	I1128 00:10:23.055603   28506 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1128 00:10:23.055608   28506 command_runner.go:130] > RuntimeApiVersion:  v1
	I1128 00:10:23.056916   28506 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:10:23.056995   28506 ssh_runner.go:195] Run: crio --version
	I1128 00:10:23.109270   28506 command_runner.go:130] > crio version 1.24.1
	I1128 00:10:23.109296   28506 command_runner.go:130] > Version:          1.24.1
	I1128 00:10:23.109303   28506 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1128 00:10:23.109308   28506 command_runner.go:130] > GitTreeState:     dirty
	I1128 00:10:23.109318   28506 command_runner.go:130] > BuildDate:        2023-11-27T22:40:48Z
	I1128 00:10:23.109323   28506 command_runner.go:130] > GoVersion:        go1.19.9
	I1128 00:10:23.109331   28506 command_runner.go:130] > Compiler:         gc
	I1128 00:10:23.109335   28506 command_runner.go:130] > Platform:         linux/amd64
	I1128 00:10:23.109340   28506 command_runner.go:130] > Linkmode:         dynamic
	I1128 00:10:23.109352   28506 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1128 00:10:23.109362   28506 command_runner.go:130] > SeccompEnabled:   true
	I1128 00:10:23.109370   28506 command_runner.go:130] > AppArmorEnabled:  false
	I1128 00:10:23.110713   28506 ssh_runner.go:195] Run: crio --version
	I1128 00:10:23.159581   28506 command_runner.go:130] > crio version 1.24.1
	I1128 00:10:23.159607   28506 command_runner.go:130] > Version:          1.24.1
	I1128 00:10:23.159614   28506 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1128 00:10:23.159619   28506 command_runner.go:130] > GitTreeState:     dirty
	I1128 00:10:23.159625   28506 command_runner.go:130] > BuildDate:        2023-11-27T22:40:48Z
	I1128 00:10:23.159630   28506 command_runner.go:130] > GoVersion:        go1.19.9
	I1128 00:10:23.159636   28506 command_runner.go:130] > Compiler:         gc
	I1128 00:10:23.159643   28506 command_runner.go:130] > Platform:         linux/amd64
	I1128 00:10:23.159651   28506 command_runner.go:130] > Linkmode:         dynamic
	I1128 00:10:23.159662   28506 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1128 00:10:23.159673   28506 command_runner.go:130] > SeccompEnabled:   true
	I1128 00:10:23.159684   28506 command_runner.go:130] > AppArmorEnabled:  false
	I1128 00:10:23.162928   28506 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 00:10:23.164228   28506 out.go:177]   - env NO_PROXY=192.168.39.159
	I1128 00:10:23.165542   28506 out.go:177]   - env NO_PROXY=192.168.39.159,192.168.39.97
	I1128 00:10:23.166948   28506 main.go:141] libmachine: (multinode-883509-m03) Calling .GetIP
	I1128 00:10:23.169836   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:10:23.170308   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:98:58", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:56:23 +0000 UTC Type:0 Mac:52:54:00:28:98:58 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:multinode-883509-m03 Clientid:01:52:54:00:28:98:58}
	I1128 00:10:23.170351   28506 main.go:141] libmachine: (multinode-883509-m03) DBG | domain multinode-883509-m03 has defined IP address 192.168.39.128 and MAC address 52:54:00:28:98:58 in network mk-multinode-883509
	I1128 00:10:23.170535   28506 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1128 00:10:23.175215   28506 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1128 00:10:23.175258   28506 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509 for IP: 192.168.39.128
	I1128 00:10:23.175272   28506 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:10:23.175386   28506 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:10:23.175429   28506 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:10:23.175446   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1128 00:10:23.175465   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1128 00:10:23.175477   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1128 00:10:23.175490   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1128 00:10:23.175538   28506 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:10:23.175567   28506 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:10:23.175583   28506 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:10:23.175607   28506 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:10:23.175628   28506 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:10:23.175652   28506 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:10:23.175689   28506 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:10:23.175712   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:10:23.175724   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem -> /usr/share/ca-certificates/11930.pem
	I1128 00:10:23.175737   28506 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> /usr/share/ca-certificates/119302.pem
	I1128 00:10:23.176122   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:10:23.202553   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:10:23.226848   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:10:23.251029   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:10:23.275044   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:10:23.298529   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:10:23.321939   28506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:10:23.345767   28506 ssh_runner.go:195] Run: openssl version
	I1128 00:10:23.351398   28506 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1128 00:10:23.351649   28506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:10:23.363111   28506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:10:23.368092   28506 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:10:23.368119   28506 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:10:23.368156   28506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:10:23.374518   28506 command_runner.go:130] > 51391683
	I1128 00:10:23.374572   28506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:10:23.384187   28506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:10:23.395495   28506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:10:23.401104   28506 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:10:23.401132   28506 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:10:23.401173   28506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:10:23.406994   28506 command_runner.go:130] > 3ec20f2e
	I1128 00:10:23.407068   28506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:10:23.416336   28506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:10:23.427187   28506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:10:23.432065   28506 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:10:23.432093   28506 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:10:23.432138   28506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:10:23.437724   28506 command_runner.go:130] > b5213941
	I1128 00:10:23.437783   28506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
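The openssl/ln sequence above installs each CA certificate under /etc/ssl/certs using OpenSSL's subject-hash naming: `openssl x509 -hash -noout` prints the hash (e.g. b5213941 for minikubeCA.pem) and the certificate is then symlinked as `<hash>.0`. A minimal Go sketch of the same pattern, run locally rather than over ssh_runner; the helper name linkCertBySubjectHash is illustrative, and openssl is assumed to be on PATH.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCertBySubjectHash computes the OpenSSL subject hash of a CA certificate
    // and exposes it in certDir as "<hash>.0", matching the ln -fs pattern in the log.
    func linkCertBySubjectHash(certPath, certDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certDir, hash+".0")
    	_ = os.Remove(link) // equivalent of ln -fs: replace any existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCertBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }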
	I1128 00:10:23.448902   28506 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:10:23.453510   28506 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 00:10:23.453540   28506 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 00:10:23.453628   28506 ssh_runner.go:195] Run: crio config
	I1128 00:10:23.512895   28506 command_runner.go:130] ! time="2023-11-28 00:10:23.503588837Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1128 00:10:23.513067   28506 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1128 00:10:23.526102   28506 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1128 00:10:23.526124   28506 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1128 00:10:23.526131   28506 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1128 00:10:23.526135   28506 command_runner.go:130] > #
	I1128 00:10:23.526145   28506 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1128 00:10:23.526155   28506 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1128 00:10:23.526166   28506 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1128 00:10:23.526184   28506 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1128 00:10:23.526194   28506 command_runner.go:130] > # reload'.
	I1128 00:10:23.526203   28506 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1128 00:10:23.526215   28506 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1128 00:10:23.526229   28506 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1128 00:10:23.526242   28506 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1128 00:10:23.526252   28506 command_runner.go:130] > [crio]
	I1128 00:10:23.526267   28506 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1128 00:10:23.526279   28506 command_runner.go:130] > # containers images, in this directory.
	I1128 00:10:23.526288   28506 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1128 00:10:23.526300   28506 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1128 00:10:23.526307   28506 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1128 00:10:23.526314   28506 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1128 00:10:23.526322   28506 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1128 00:10:23.526326   28506 command_runner.go:130] > storage_driver = "overlay"
	I1128 00:10:23.526337   28506 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1128 00:10:23.526351   28506 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1128 00:10:23.526361   28506 command_runner.go:130] > storage_option = [
	I1128 00:10:23.526369   28506 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1128 00:10:23.526379   28506 command_runner.go:130] > ]
	I1128 00:10:23.526393   28506 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1128 00:10:23.526406   28506 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1128 00:10:23.526416   28506 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1128 00:10:23.526424   28506 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1128 00:10:23.526431   28506 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1128 00:10:23.526443   28506 command_runner.go:130] > # always happen on a node reboot
	I1128 00:10:23.526455   28506 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1128 00:10:23.526468   28506 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1128 00:10:23.526481   28506 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1128 00:10:23.526499   28506 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1128 00:10:23.526511   28506 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1128 00:10:23.526526   28506 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1128 00:10:23.526538   28506 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1128 00:10:23.526547   28506 command_runner.go:130] > # internal_wipe = true
	I1128 00:10:23.526560   28506 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1128 00:10:23.526574   28506 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1128 00:10:23.526594   28506 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1128 00:10:23.526606   28506 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1128 00:10:23.526619   28506 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1128 00:10:23.526628   28506 command_runner.go:130] > [crio.api]
	I1128 00:10:23.526636   28506 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1128 00:10:23.526646   28506 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1128 00:10:23.526655   28506 command_runner.go:130] > # IP address on which the stream server will listen.
	I1128 00:10:23.526667   28506 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1128 00:10:23.526679   28506 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1128 00:10:23.526691   28506 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1128 00:10:23.526700   28506 command_runner.go:130] > # stream_port = "0"
	I1128 00:10:23.526713   28506 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1128 00:10:23.526721   28506 command_runner.go:130] > # stream_enable_tls = false
	I1128 00:10:23.526733   28506 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1128 00:10:23.526741   28506 command_runner.go:130] > # stream_idle_timeout = ""
	I1128 00:10:23.526750   28506 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1128 00:10:23.526764   28506 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1128 00:10:23.526774   28506 command_runner.go:130] > # minutes.
	I1128 00:10:23.526784   28506 command_runner.go:130] > # stream_tls_cert = ""
	I1128 00:10:23.526797   28506 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1128 00:10:23.526810   28506 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1128 00:10:23.526819   28506 command_runner.go:130] > # stream_tls_key = ""
	I1128 00:10:23.526828   28506 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1128 00:10:23.526841   28506 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1128 00:10:23.526855   28506 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1128 00:10:23.526866   28506 command_runner.go:130] > # stream_tls_ca = ""
	I1128 00:10:23.526882   28506 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1128 00:10:23.526893   28506 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1128 00:10:23.526907   28506 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1128 00:10:23.526917   28506 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1128 00:10:23.526938   28506 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1128 00:10:23.526951   28506 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1128 00:10:23.526962   28506 command_runner.go:130] > [crio.runtime]
	I1128 00:10:23.526975   28506 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1128 00:10:23.526986   28506 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1128 00:10:23.526996   28506 command_runner.go:130] > # "nofile=1024:2048"
	I1128 00:10:23.527010   28506 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1128 00:10:23.527017   28506 command_runner.go:130] > # default_ulimits = [
	I1128 00:10:23.527021   28506 command_runner.go:130] > # ]
	I1128 00:10:23.527034   28506 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1128 00:10:23.527045   28506 command_runner.go:130] > # no_pivot = false
	I1128 00:10:23.527059   28506 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1128 00:10:23.527072   28506 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1128 00:10:23.527084   28506 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1128 00:10:23.527097   28506 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1128 00:10:23.527108   28506 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1128 00:10:23.527118   28506 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1128 00:10:23.527128   28506 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1128 00:10:23.527139   28506 command_runner.go:130] > # Cgroup setting for conmon
	I1128 00:10:23.527151   28506 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1128 00:10:23.527161   28506 command_runner.go:130] > conmon_cgroup = "pod"
	I1128 00:10:23.527174   28506 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1128 00:10:23.527186   28506 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1128 00:10:23.527200   28506 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1128 00:10:23.527209   28506 command_runner.go:130] > conmon_env = [
	I1128 00:10:23.527220   28506 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1128 00:10:23.527229   28506 command_runner.go:130] > ]
	I1128 00:10:23.527242   28506 command_runner.go:130] > # Additional environment variables to set for all the
	I1128 00:10:23.527254   28506 command_runner.go:130] > # containers. These are overridden if set in the
	I1128 00:10:23.527266   28506 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1128 00:10:23.527277   28506 command_runner.go:130] > # default_env = [
	I1128 00:10:23.527287   28506 command_runner.go:130] > # ]
	I1128 00:10:23.527300   28506 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1128 00:10:23.527308   28506 command_runner.go:130] > # selinux = false
	I1128 00:10:23.527319   28506 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1128 00:10:23.527333   28506 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1128 00:10:23.527346   28506 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1128 00:10:23.527356   28506 command_runner.go:130] > # seccomp_profile = ""
	I1128 00:10:23.527369   28506 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1128 00:10:23.527383   28506 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1128 00:10:23.527396   28506 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1128 00:10:23.527407   28506 command_runner.go:130] > # which might increase security.
	I1128 00:10:23.527415   28506 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1128 00:10:23.527427   28506 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1128 00:10:23.527441   28506 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1128 00:10:23.527455   28506 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1128 00:10:23.527469   28506 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1128 00:10:23.527480   28506 command_runner.go:130] > # This option supports live configuration reload.
	I1128 00:10:23.527491   28506 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1128 00:10:23.527503   28506 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1128 00:10:23.527510   28506 command_runner.go:130] > # the cgroup blockio controller.
	I1128 00:10:23.527517   28506 command_runner.go:130] > # blockio_config_file = ""
	I1128 00:10:23.527530   28506 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1128 00:10:23.527541   28506 command_runner.go:130] > # irqbalance daemon.
	I1128 00:10:23.527550   28506 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1128 00:10:23.527564   28506 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1128 00:10:23.527580   28506 command_runner.go:130] > # This option supports live configuration reload.
	I1128 00:10:23.527590   28506 command_runner.go:130] > # rdt_config_file = ""
	I1128 00:10:23.527599   28506 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1128 00:10:23.527608   28506 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1128 00:10:23.527618   28506 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1128 00:10:23.527628   28506 command_runner.go:130] > # separate_pull_cgroup = ""
	I1128 00:10:23.527642   28506 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1128 00:10:23.527656   28506 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1128 00:10:23.527666   28506 command_runner.go:130] > # will be added.
	I1128 00:10:23.527676   28506 command_runner.go:130] > # default_capabilities = [
	I1128 00:10:23.527686   28506 command_runner.go:130] > # 	"CHOWN",
	I1128 00:10:23.527697   28506 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1128 00:10:23.527705   28506 command_runner.go:130] > # 	"FSETID",
	I1128 00:10:23.527712   28506 command_runner.go:130] > # 	"FOWNER",
	I1128 00:10:23.527717   28506 command_runner.go:130] > # 	"SETGID",
	I1128 00:10:23.527727   28506 command_runner.go:130] > # 	"SETUID",
	I1128 00:10:23.527737   28506 command_runner.go:130] > # 	"SETPCAP",
	I1128 00:10:23.527745   28506 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1128 00:10:23.527755   28506 command_runner.go:130] > # 	"KILL",
	I1128 00:10:23.527763   28506 command_runner.go:130] > # ]
	I1128 00:10:23.527776   28506 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1128 00:10:23.527790   28506 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1128 00:10:23.527799   28506 command_runner.go:130] > # default_sysctls = [
	I1128 00:10:23.527806   28506 command_runner.go:130] > # ]
	I1128 00:10:23.527811   28506 command_runner.go:130] > # List of devices on the host that a
	I1128 00:10:23.527824   28506 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1128 00:10:23.527835   28506 command_runner.go:130] > # allowed_devices = [
	I1128 00:10:23.527842   28506 command_runner.go:130] > # 	"/dev/fuse",
	I1128 00:10:23.527851   28506 command_runner.go:130] > # ]
	I1128 00:10:23.527862   28506 command_runner.go:130] > # List of additional devices. specified as
	I1128 00:10:23.527877   28506 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1128 00:10:23.527889   28506 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1128 00:10:23.527912   28506 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1128 00:10:23.527923   28506 command_runner.go:130] > # additional_devices = [
	I1128 00:10:23.527932   28506 command_runner.go:130] > # ]
	I1128 00:10:23.527941   28506 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1128 00:10:23.527951   28506 command_runner.go:130] > # cdi_spec_dirs = [
	I1128 00:10:23.527961   28506 command_runner.go:130] > # 	"/etc/cdi",
	I1128 00:10:23.527971   28506 command_runner.go:130] > # 	"/var/run/cdi",
	I1128 00:10:23.527981   28506 command_runner.go:130] > # ]
	I1128 00:10:23.527995   28506 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1128 00:10:23.528004   28506 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1128 00:10:23.528013   28506 command_runner.go:130] > # Defaults to false.
	I1128 00:10:23.528025   28506 command_runner.go:130] > # device_ownership_from_security_context = false
	I1128 00:10:23.528039   28506 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1128 00:10:23.528052   28506 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1128 00:10:23.528062   28506 command_runner.go:130] > # hooks_dir = [
	I1128 00:10:23.528075   28506 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1128 00:10:23.528084   28506 command_runner.go:130] > # ]
	I1128 00:10:23.528097   28506 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1128 00:10:23.528107   28506 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1128 00:10:23.528118   28506 command_runner.go:130] > # its default mounts from the following two files:
	I1128 00:10:23.528127   28506 command_runner.go:130] > #
	I1128 00:10:23.528141   28506 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1128 00:10:23.528155   28506 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1128 00:10:23.528167   28506 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1128 00:10:23.528176   28506 command_runner.go:130] > #
	I1128 00:10:23.528188   28506 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1128 00:10:23.528197   28506 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1128 00:10:23.528210   28506 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1128 00:10:23.528222   28506 command_runner.go:130] > #      only add mounts it finds in this file.
	I1128 00:10:23.528228   28506 command_runner.go:130] > #
	I1128 00:10:23.528239   28506 command_runner.go:130] > # default_mounts_file = ""
	I1128 00:10:23.528252   28506 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1128 00:10:23.528265   28506 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1128 00:10:23.528275   28506 command_runner.go:130] > pids_limit = 1024
	I1128 00:10:23.528285   28506 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1128 00:10:23.528294   28506 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1128 00:10:23.528303   28506 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1128 00:10:23.528319   28506 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1128 00:10:23.528330   28506 command_runner.go:130] > # log_size_max = -1
	I1128 00:10:23.528356   28506 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1128 00:10:23.528367   28506 command_runner.go:130] > # log_to_journald = false
	I1128 00:10:23.528375   28506 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1128 00:10:23.528383   28506 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1128 00:10:23.528392   28506 command_runner.go:130] > # Path to directory for container attach sockets.
	I1128 00:10:23.528403   28506 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1128 00:10:23.528413   28506 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1128 00:10:23.528424   28506 command_runner.go:130] > # bind_mount_prefix = ""
	I1128 00:10:23.528433   28506 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1128 00:10:23.528443   28506 command_runner.go:130] > # read_only = false
	I1128 00:10:23.528453   28506 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1128 00:10:23.528466   28506 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1128 00:10:23.528474   28506 command_runner.go:130] > # live configuration reload.
	I1128 00:10:23.528480   28506 command_runner.go:130] > # log_level = "info"
	I1128 00:10:23.528493   28506 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1128 00:10:23.528505   28506 command_runner.go:130] > # This option supports live configuration reload.
	I1128 00:10:23.528515   28506 command_runner.go:130] > # log_filter = ""
	I1128 00:10:23.528525   28506 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1128 00:10:23.528538   28506 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1128 00:10:23.528548   28506 command_runner.go:130] > # separated by comma.
	I1128 00:10:23.528556   28506 command_runner.go:130] > # uid_mappings = ""
	I1128 00:10:23.528563   28506 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1128 00:10:23.528582   28506 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1128 00:10:23.528592   28506 command_runner.go:130] > # separated by comma.
	I1128 00:10:23.528600   28506 command_runner.go:130] > # gid_mappings = ""
	I1128 00:10:23.528613   28506 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1128 00:10:23.528626   28506 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1128 00:10:23.528638   28506 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1128 00:10:23.528648   28506 command_runner.go:130] > # minimum_mappable_uid = -1
	I1128 00:10:23.528661   28506 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1128 00:10:23.528670   28506 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1128 00:10:23.528684   28506 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1128 00:10:23.528695   28506 command_runner.go:130] > # minimum_mappable_gid = -1
	I1128 00:10:23.528709   28506 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1128 00:10:23.528723   28506 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1128 00:10:23.528735   28506 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1128 00:10:23.528745   28506 command_runner.go:130] > # ctr_stop_timeout = 30
	I1128 00:10:23.528767   28506 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1128 00:10:23.528781   28506 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1128 00:10:23.528792   28506 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1128 00:10:23.528804   28506 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1128 00:10:23.528816   28506 command_runner.go:130] > drop_infra_ctr = false
	I1128 00:10:23.528826   28506 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1128 00:10:23.528836   28506 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1128 00:10:23.528852   28506 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1128 00:10:23.528862   28506 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1128 00:10:23.528873   28506 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1128 00:10:23.528885   28506 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1128 00:10:23.528896   28506 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1128 00:10:23.528910   28506 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1128 00:10:23.528921   28506 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1128 00:10:23.528931   28506 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1128 00:10:23.528942   28506 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1128 00:10:23.528956   28506 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1128 00:10:23.528967   28506 command_runner.go:130] > # default_runtime = "runc"
	I1128 00:10:23.528976   28506 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1128 00:10:23.528992   28506 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1128 00:10:23.529008   28506 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1128 00:10:23.529020   28506 command_runner.go:130] > # creation as a file is not desired either.
	I1128 00:10:23.529034   28506 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1128 00:10:23.529042   28506 command_runner.go:130] > # the hostname is being managed dynamically.
	I1128 00:10:23.529050   28506 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1128 00:10:23.529060   28506 command_runner.go:130] > # ]
	I1128 00:10:23.529071   28506 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1128 00:10:23.529085   28506 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1128 00:10:23.529099   28506 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1128 00:10:23.529112   28506 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1128 00:10:23.529120   28506 command_runner.go:130] > #
	I1128 00:10:23.529128   28506 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1128 00:10:23.529136   28506 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1128 00:10:23.529143   28506 command_runner.go:130] > #  runtime_type = "oci"
	I1128 00:10:23.529155   28506 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1128 00:10:23.529166   28506 command_runner.go:130] > #  privileged_without_host_devices = false
	I1128 00:10:23.529176   28506 command_runner.go:130] > #  allowed_annotations = []
	I1128 00:10:23.529184   28506 command_runner.go:130] > # Where:
	I1128 00:10:23.529196   28506 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1128 00:10:23.529209   28506 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1128 00:10:23.529219   28506 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1128 00:10:23.529231   28506 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1128 00:10:23.529241   28506 command_runner.go:130] > #   in $PATH.
	I1128 00:10:23.529255   28506 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1128 00:10:23.529266   28506 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1128 00:10:23.529279   28506 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1128 00:10:23.529288   28506 command_runner.go:130] > #   state.
	I1128 00:10:23.529302   28506 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1128 00:10:23.529312   28506 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1128 00:10:23.529325   28506 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1128 00:10:23.529338   28506 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1128 00:10:23.529352   28506 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1128 00:10:23.529366   28506 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1128 00:10:23.529378   28506 command_runner.go:130] > #   The currently recognized values are:
	I1128 00:10:23.529391   28506 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1128 00:10:23.529404   28506 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1128 00:10:23.529414   28506 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1128 00:10:23.529428   28506 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1128 00:10:23.529444   28506 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1128 00:10:23.529459   28506 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1128 00:10:23.529472   28506 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1128 00:10:23.529486   28506 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1128 00:10:23.529497   28506 command_runner.go:130] > #   should be moved to the container's cgroup
	I1128 00:10:23.529506   28506 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1128 00:10:23.529514   28506 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1128 00:10:23.529524   28506 command_runner.go:130] > runtime_type = "oci"
	I1128 00:10:23.529535   28506 command_runner.go:130] > runtime_root = "/run/runc"
	I1128 00:10:23.529544   28506 command_runner.go:130] > runtime_config_path = ""
	I1128 00:10:23.529554   28506 command_runner.go:130] > monitor_path = ""
	I1128 00:10:23.529564   28506 command_runner.go:130] > monitor_cgroup = ""
	I1128 00:10:23.529574   28506 command_runner.go:130] > monitor_exec_cgroup = ""
	I1128 00:10:23.529591   28506 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1128 00:10:23.529600   28506 command_runner.go:130] > # running containers
	I1128 00:10:23.529607   28506 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1128 00:10:23.529616   28506 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1128 00:10:23.529652   28506 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1128 00:10:23.529665   28506 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1128 00:10:23.529674   28506 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1128 00:10:23.529684   28506 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1128 00:10:23.529692   28506 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1128 00:10:23.529699   28506 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1128 00:10:23.529710   28506 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1128 00:10:23.529721   28506 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
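
The commented kata entries above show the shape of a runtime-handler entry without giving a concrete, uncommented one. As a minimal sketch of adding a handler by hand (handler name, binary path and drop-in filename are illustrative, not taken from this run; CRI-O also merges drop-in files from /etc/crio/crio.conf.d/ by default):

sudo tee /etc/crio/crio.conf.d/10-crun.conf >/dev/null <<'EOF'
# hypothetical extra OCI runtime handler
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
EOF
sudo systemctl restart crio
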
	I1128 00:10:23.529738   28506 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1128 00:10:23.529750   28506 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1128 00:10:23.529763   28506 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1128 00:10:23.529778   28506 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1128 00:10:23.529791   28506 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1128 00:10:23.529802   28506 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1128 00:10:23.529820   28506 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1128 00:10:23.529836   28506 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1128 00:10:23.529849   28506 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1128 00:10:23.529864   28506 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1128 00:10:23.529872   28506 command_runner.go:130] > # Example:
	I1128 00:10:23.529878   28506 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1128 00:10:23.529887   28506 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1128 00:10:23.529896   28506 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1128 00:10:23.529909   28506 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1128 00:10:23.529916   28506 command_runner.go:130] > # cpuset = "0-1"
	I1128 00:10:23.529926   28506 command_runner.go:130] > # cpushares = 0
	I1128 00:10:23.529934   28506 command_runner.go:130] > # Where:
	I1128 00:10:23.529944   28506 command_runner.go:130] > # The workload name is workload-type.
	I1128 00:10:23.529959   28506 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1128 00:10:23.529971   28506 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1128 00:10:23.529981   28506 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1128 00:10:23.529992   28506 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1128 00:10:23.530005   28506 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1128 00:10:23.530011   28506 command_runner.go:130] > # 
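
To make the opt-in mechanism concrete: assuming the commented workload-type example above were uncommented, a pod would select it with the activation annotation and could override a resource per container, following the annotation form described in the comments. A hypothetical manifest (pod and container names are invented for illustration):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: workload-demo
  annotations:
    io.crio/workload: ""                                # activation annotation; value is ignored
    io.crio.workload-type/demo: '{"cpushares": "512"}'  # per-container override, per the example above
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sleep", "3600"]
EOF
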
	I1128 00:10:23.530023   28506 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1128 00:10:23.530032   28506 command_runner.go:130] > #
	I1128 00:10:23.530045   28506 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1128 00:10:23.530058   28506 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1128 00:10:23.530071   28506 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1128 00:10:23.530080   28506 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1128 00:10:23.530093   28506 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1128 00:10:23.530103   28506 command_runner.go:130] > [crio.image]
	I1128 00:10:23.530114   28506 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1128 00:10:23.530125   28506 command_runner.go:130] > # default_transport = "docker://"
	I1128 00:10:23.530138   28506 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1128 00:10:23.530153   28506 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1128 00:10:23.530163   28506 command_runner.go:130] > # global_auth_file = ""
	I1128 00:10:23.530172   28506 command_runner.go:130] > # The image used to instantiate infra containers.
	I1128 00:10:23.530181   28506 command_runner.go:130] > # This option supports live configuration reload.
	I1128 00:10:23.530191   28506 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1128 00:10:23.530206   28506 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1128 00:10:23.530219   28506 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1128 00:10:23.530231   28506 command_runner.go:130] > # This option supports live configuration reload.
	I1128 00:10:23.530241   28506 command_runner.go:130] > # pause_image_auth_file = ""
	I1128 00:10:23.530254   28506 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1128 00:10:23.530266   28506 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1128 00:10:23.530276   28506 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1128 00:10:23.530288   28506 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1128 00:10:23.530299   28506 command_runner.go:130] > # pause_command = "/pause"
	I1128 00:10:23.530311   28506 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1128 00:10:23.530324   28506 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1128 00:10:23.530337   28506 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1128 00:10:23.530350   28506 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1128 00:10:23.530362   28506 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1128 00:10:23.530369   28506 command_runner.go:130] > # signature_policy = ""
	I1128 00:10:23.530377   28506 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1128 00:10:23.530391   28506 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1128 00:10:23.530402   28506 command_runner.go:130] > # changing them here.
	I1128 00:10:23.530409   28506 command_runner.go:130] > # insecure_registries = [
	I1128 00:10:23.530418   28506 command_runner.go:130] > # ]
	I1128 00:10:23.530434   28506 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1128 00:10:23.530446   28506 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1128 00:10:23.530456   28506 command_runner.go:130] > # image_volumes = "mkdir"
	I1128 00:10:23.530467   28506 command_runner.go:130] > # Temporary directory to use for storing big files
	I1128 00:10:23.530474   28506 command_runner.go:130] > # big_files_temporary_dir = ""
	I1128 00:10:23.530484   28506 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1128 00:10:23.530493   28506 command_runner.go:130] > # CNI plugins.
	I1128 00:10:23.530504   28506 command_runner.go:130] > [crio.network]
	I1128 00:10:23.530516   28506 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1128 00:10:23.530529   28506 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1128 00:10:23.530539   28506 command_runner.go:130] > # cni_default_network = ""
	I1128 00:10:23.530552   28506 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1128 00:10:23.530559   28506 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1128 00:10:23.530566   28506 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1128 00:10:23.530580   28506 command_runner.go:130] > # plugin_dirs = [
	I1128 00:10:23.530591   28506 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1128 00:10:23.530597   28506 command_runner.go:130] > # ]
	I1128 00:10:23.530611   28506 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1128 00:10:23.530620   28506 command_runner.go:130] > [crio.metrics]
	I1128 00:10:23.530631   28506 command_runner.go:130] > # Globally enable or disable metrics support.
	I1128 00:10:23.530640   28506 command_runner.go:130] > enable_metrics = true
	I1128 00:10:23.530651   28506 command_runner.go:130] > # Specify enabled metrics collectors.
	I1128 00:10:23.530661   28506 command_runner.go:130] > # Per default all metrics are enabled.
	I1128 00:10:23.530671   28506 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1128 00:10:23.530681   28506 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1128 00:10:23.530697   28506 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1128 00:10:23.530708   28506 command_runner.go:130] > # metrics_collectors = [
	I1128 00:10:23.530718   28506 command_runner.go:130] > # 	"operations",
	I1128 00:10:23.530729   28506 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1128 00:10:23.530737   28506 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1128 00:10:23.530748   28506 command_runner.go:130] > # 	"operations_errors",
	I1128 00:10:23.530757   28506 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1128 00:10:23.530765   28506 command_runner.go:130] > # 	"image_pulls_by_name",
	I1128 00:10:23.530770   28506 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1128 00:10:23.530779   28506 command_runner.go:130] > # 	"image_pulls_failures",
	I1128 00:10:23.530788   28506 command_runner.go:130] > # 	"image_pulls_successes",
	I1128 00:10:23.530799   28506 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1128 00:10:23.530806   28506 command_runner.go:130] > # 	"image_layer_reuse",
	I1128 00:10:23.530817   28506 command_runner.go:130] > # 	"containers_oom_total",
	I1128 00:10:23.530826   28506 command_runner.go:130] > # 	"containers_oom",
	I1128 00:10:23.530836   28506 command_runner.go:130] > # 	"processes_defunct",
	I1128 00:10:23.530846   28506 command_runner.go:130] > # 	"operations_total",
	I1128 00:10:23.530856   28506 command_runner.go:130] > # 	"operations_latency_seconds",
	I1128 00:10:23.530866   28506 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1128 00:10:23.530873   28506 command_runner.go:130] > # 	"operations_errors_total",
	I1128 00:10:23.530879   28506 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1128 00:10:23.530891   28506 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1128 00:10:23.530903   28506 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1128 00:10:23.530915   28506 command_runner.go:130] > # 	"image_pulls_success_total",
	I1128 00:10:23.530925   28506 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1128 00:10:23.530936   28506 command_runner.go:130] > # 	"containers_oom_count_total",
	I1128 00:10:23.530944   28506 command_runner.go:130] > # ]
	I1128 00:10:23.530956   28506 command_runner.go:130] > # The port on which the metrics server will listen.
	I1128 00:10:23.530965   28506 command_runner.go:130] > # metrics_port = 9090
	I1128 00:10:23.530974   28506 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1128 00:10:23.530983   28506 command_runner.go:130] > # metrics_socket = ""
	I1128 00:10:23.530995   28506 command_runner.go:130] > # The certificate for the secure metrics server.
	I1128 00:10:23.531009   28506 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1128 00:10:23.531022   28506 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1128 00:10:23.531033   28506 command_runner.go:130] > # certificate on any modification event.
	I1128 00:10:23.531042   28506 command_runner.go:130] > # metrics_cert = ""
	I1128 00:10:23.531054   28506 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1128 00:10:23.531062   28506 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1128 00:10:23.531071   28506 command_runner.go:130] > # metrics_key = ""
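
Since enable_metrics is true here and metrics_port keeps its default of 9090, the exporter can be spot-checked from inside the guest; a sketch, assuming the default port and plain HTTP (no metrics_cert configured):

curl -s http://127.0.0.1:9090/metrics | grep '^crio_' | head
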
	I1128 00:10:23.531085   28506 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1128 00:10:23.531095   28506 command_runner.go:130] > [crio.tracing]
	I1128 00:10:23.531104   28506 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1128 00:10:23.531115   28506 command_runner.go:130] > # enable_tracing = false
	I1128 00:10:23.531126   28506 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1128 00:10:23.531137   28506 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1128 00:10:23.531149   28506 command_runner.go:130] > # Number of samples to collect per million spans.
	I1128 00:10:23.531159   28506 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1128 00:10:23.531168   28506 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1128 00:10:23.531177   28506 command_runner.go:130] > [crio.stats]
	I1128 00:10:23.531190   28506 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1128 00:10:23.531203   28506 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1128 00:10:23.531213   28506 command_runner.go:130] > # stats_collection_period = 0
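
Everything above is minikube echoing back the CRI-O configuration it read from the guest while provisioning the worker. To inspect the same node directly, something along these lines works (profile name as in this run; the -n flag selecting the m03 worker is an assumption about which guest this dump came from, and crictl info reports the runtime's view over the CRI socket):

minikube -p multinode-883509 ssh -n m03 -- sudo cat /etc/crio/crio.conf
minikube -p multinode-883509 ssh -n m03 -- sudo crictl info
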
	I1128 00:10:23.531282   28506 cni.go:84] Creating CNI manager for ""
	I1128 00:10:23.531294   28506 cni.go:136] 3 nodes found, recommending kindnet
	I1128 00:10:23.531304   28506 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:10:23.531329   28506 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.128 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-883509 NodeName:multinode-883509-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:10:23.531467   28506 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-883509-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
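
This is the document minikube hands to kubeadm on the joining node. When debugging a config like this by hand, it can be written to a file and checked before use; a sketch (the file path is hypothetical, and this assumes the validate subcommand shipped with recent kubeadm releases such as the v1.28 binaries used here):

sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /tmp/kubeadm.yaml
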
	
	I1128 00:10:23.531521   28506 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-883509-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-883509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
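
The kubelet unit override above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. Verifying what systemd actually merged on the worker can be done with systemctl cat; a sketch (the -n flag picking the m03 node is the only assumption beyond this run's profile name):

minikube -p multinode-883509 ssh -n m03 -- sudo systemctl cat kubelet
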
	I1128 00:10:23.531588   28506 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 00:10:23.543003   28506 command_runner.go:130] > kubeadm
	I1128 00:10:23.543021   28506 command_runner.go:130] > kubectl
	I1128 00:10:23.543025   28506 command_runner.go:130] > kubelet
	I1128 00:10:23.543181   28506 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:10:23.543237   28506 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1128 00:10:23.553795   28506 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1128 00:10:23.570983   28506 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:10:23.587949   28506 ssh_runner.go:195] Run: grep 192.168.39.159	control-plane.minikube.internal$ /etc/hosts
	I1128 00:10:23.591770   28506 command_runner.go:130] > 192.168.39.159	control-plane.minikube.internal
	I1128 00:10:23.591878   28506 host.go:66] Checking if "multinode-883509" exists ...
	I1128 00:10:23.592127   28506 config.go:182] Loaded profile config "multinode-883509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:10:23.592196   28506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 00:10:23.592241   28506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:10:23.607017   28506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42899
	I1128 00:10:23.607430   28506 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:10:23.607859   28506 main.go:141] libmachine: Using API Version  1
	I1128 00:10:23.607884   28506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:10:23.608155   28506 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:10:23.608349   28506 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1128 00:10:23.608481   28506 start.go:304] JoinCluster: &{Name:multinode-883509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-883509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.128 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:10:23.608590   28506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1128 00:10:23.608603   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1128 00:10:23.611233   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:10:23.611652   28506 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 01:04:45 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1128 00:10:23.611680   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:10:23.611821   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1128 00:10:23.611991   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1128 00:10:23.612133   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1128 00:10:23.612249   28506 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa Username:docker}
	I1128 00:10:23.791805   28506 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 9snp5h.4gwfpj5xc314fa34 --discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1128 00:10:23.791866   28506 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.128 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1128 00:10:23.791899   28506 host.go:66] Checking if "multinode-883509" exists ...
	I1128 00:10:23.792242   28506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 00:10:23.792278   28506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:10:23.806742   28506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I1128 00:10:23.807166   28506 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:10:23.807683   28506 main.go:141] libmachine: Using API Version  1
	I1128 00:10:23.807707   28506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:10:23.808073   28506 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:10:23.808255   28506 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1128 00:10:23.808443   28506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-883509-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1128 00:10:23.808462   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1128 00:10:23.811314   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:10:23.811759   28506 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 01:04:45 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1128 00:10:23.811787   28506 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1128 00:10:23.811939   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1128 00:10:23.812116   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1128 00:10:23.812269   28506 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1128 00:10:23.812433   28506 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa Username:docker}
	I1128 00:10:24.013756   28506 command_runner.go:130] > node/multinode-883509-m03 cordoned
	I1128 00:10:27.059095   28506 command_runner.go:130] > pod "busybox-5bc68d56bd-6q5sf" has DeletionTimestamp older than 1 seconds, skipping
	I1128 00:10:27.059127   28506 command_runner.go:130] > node/multinode-883509-m03 drained
	I1128 00:10:27.060877   28506 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1128 00:10:27.060906   28506 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-xtnn9, kube-system/kube-proxy-6dvv4
	I1128 00:10:27.060950   28506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-883509-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.252464964s)
	I1128 00:10:27.060966   28506 node.go:108] successfully drained node "m03"
	I1128 00:10:27.061332   28506 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:10:27.061637   28506 kapi.go:59] client config for multinode-883509: &rest.Config{Host:"https://192.168.39.159:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 00:10:27.061992   28506 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1128 00:10:27.062049   28506 round_trippers.go:463] DELETE https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m03
	I1128 00:10:27.062055   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:27.062070   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:27.062082   28506 round_trippers.go:473]     Content-Type: application/json
	I1128 00:10:27.062091   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:27.074966   28506 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1128 00:10:27.074991   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:27.075001   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:27 GMT
	I1128 00:10:27.075009   28506 round_trippers.go:580]     Audit-Id: 8d3c8597-2966-44f0-8538-2148f9fd8e5f
	I1128 00:10:27.075017   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:27.075024   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:27.075029   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:27.075034   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:27.075046   28506 round_trippers.go:580]     Content-Length: 171
	I1128 00:10:27.075065   28506 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-883509-m03","kind":"nodes","uid":"2bc47ce6-2761-4c93-b9f7-cf65c531732f"}}
	I1128 00:10:27.075098   28506 node.go:124] successfully deleted node "m03"
	I1128 00:10:27.075110   28506 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.128 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
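
The drain-and-delete just performed over SSH is equivalent to the usual two kubectl steps against the control plane; for reference (same node name as above, run with a kubeconfig that reaches this cluster):

kubectl drain multinode-883509-m03 --ignore-daemonsets --delete-emptydir-data --force --grace-period=1
kubectl delete node multinode-883509-m03
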
	I1128 00:10:27.075125   28506 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.128 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1128 00:10:27.075139   28506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9snp5h.4gwfpj5xc314fa34 --discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-883509-m03"
	I1128 00:10:27.148321   28506 command_runner.go:130] ! W1128 00:10:27.138851    2365 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1128 00:10:27.148824   28506 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1128 00:10:27.304658   28506 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1128 00:10:27.304691   28506 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1128 00:10:28.089625   28506 command_runner.go:130] > [preflight] Running pre-flight checks
	I1128 00:10:28.089646   28506 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1128 00:10:28.089655   28506 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1128 00:10:28.089663   28506 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 00:10:28.089670   28506 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 00:10:28.089675   28506 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1128 00:10:28.089684   28506 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1128 00:10:28.089693   28506 command_runner.go:130] > This node has joined the cluster:
	I1128 00:10:28.089705   28506 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1128 00:10:28.089713   28506 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1128 00:10:28.089724   28506 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1128 00:10:28.090132   28506 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9snp5h.4gwfpj5xc314fa34 --discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-883509-m03": (1.014971949s)
	I1128 00:10:28.090157   28506 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1128 00:10:28.364839   28506 start.go:306] JoinCluster complete in 4.756353728s
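
After JoinCluster completes, the quickest manual cross-check is to list the nodes and the pods scheduled onto the rejoined one, for example:

kubectl get nodes -o wide
kubectl get pods -A -o wide --field-selector spec.nodeName=multinode-883509-m03
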
	I1128 00:10:28.364870   28506 cni.go:84] Creating CNI manager for ""
	I1128 00:10:28.364878   28506 cni.go:136] 3 nodes found, recommending kindnet
	I1128 00:10:28.364933   28506 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1128 00:10:28.371424   28506 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1128 00:10:28.371451   28506 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1128 00:10:28.371460   28506 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1128 00:10:28.371469   28506 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 00:10:28.371477   28506 command_runner.go:130] > Access: 2023-11-28 00:04:46.123158983 +0000
	I1128 00:10:28.371486   28506 command_runner.go:130] > Modify: 2023-11-27 22:54:55.000000000 +0000
	I1128 00:10:28.371495   28506 command_runner.go:130] > Change: 2023-11-28 00:04:44.129158983 +0000
	I1128 00:10:28.371505   28506 command_runner.go:130] >  Birth: -
	I1128 00:10:28.371573   28506 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1128 00:10:28.371587   28506 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1128 00:10:28.395178   28506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1128 00:10:28.758596   28506 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1128 00:10:28.764135   28506 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1128 00:10:28.766919   28506 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1128 00:10:28.782436   28506 command_runner.go:130] > daemonset.apps/kindnet configured
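
With the kindnet manifest reapplied, the DaemonSet rollout can be watched directly; a sketch:

kubectl -n kube-system rollout status daemonset/kindnet
kubectl -n kube-system get daemonset kindnet -o wide
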
	I1128 00:10:28.785339   28506 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:10:28.785618   28506 kapi.go:59] client config for multinode-883509: &rest.Config{Host:"https://192.168.39.159:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 00:10:28.785895   28506 round_trippers.go:463] GET https://192.168.39.159:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1128 00:10:28.785908   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:28.785916   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:28.785922   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:28.790553   28506 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 00:10:28.790574   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:28.790582   28506 round_trippers.go:580]     Content-Length: 291
	I1128 00:10:28.790593   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:28 GMT
	I1128 00:10:28.790598   28506 round_trippers.go:580]     Audit-Id: a12849f4-7269-4e1f-96d7-469ce4e34d1d
	I1128 00:10:28.790603   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:28.790610   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:28.790619   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:28.790628   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:28.790658   28506 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"6e7cc9d5-ec42-4b16-9afb-9c3b43521ec6","resourceVersion":"914","creationTimestamp":"2023-11-27T23:54:52Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1128 00:10:28.790782   28506 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-883509" context rescaled to 1 replicas
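
The rescale minikube just did via the Scale subresource corresponds to the plain kubectl command below, shown for reference:

kubectl -n kube-system scale deployment coredns --replicas=1
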
	I1128 00:10:28.790813   28506 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.128 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1128 00:10:28.792779   28506 out.go:177] * Verifying Kubernetes components...
	I1128 00:10:28.794289   28506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:10:28.811001   28506 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:10:28.811250   28506 kapi.go:59] client config for multinode-883509: &rest.Config{Host:"https://192.168.39.159:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/multinode-883509/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 00:10:28.811454   28506 node_ready.go:35] waiting up to 6m0s for node "multinode-883509-m03" to be "Ready" ...
	I1128 00:10:28.811508   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m03
	I1128 00:10:28.811515   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:28.811523   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:28.811529   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:28.814058   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:10:28.814082   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:28.814091   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:28.814100   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:28 GMT
	I1128 00:10:28.814109   28506 round_trippers.go:580]     Audit-Id: 77df2d7a-d6bc-43a7-a019-846940bcf156
	I1128 00:10:28.814122   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:28.814132   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:28.814145   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:28.814710   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m03","uid":"153f6806-1c7a-4bdc-af33-53dcf9bdc333","resourceVersion":"1330","creationTimestamp":"2023-11-28T00:10:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T00:10:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T00:10:27Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vol [truncated 3442 chars]
	I1128 00:10:28.814967   28506 node_ready.go:49] node "multinode-883509-m03" has status "Ready":"True"
	I1128 00:10:28.814982   28506 node_ready.go:38] duration metric: took 3.515355ms waiting for node "multinode-883509-m03" to be "Ready" ...
	I1128 00:10:28.814990   28506 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
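
The readiness polling that follows is the client-library equivalent of kubectl wait; done by hand it would look roughly like this (timeouts chosen to match the 6m0s used above):

kubectl wait --for=condition=Ready node/multinode-883509-m03 --timeout=6m
kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
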
	I1128 00:10:28.815037   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods
	I1128 00:10:28.815048   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:28.815055   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:28.815061   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:28.819089   28506 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 00:10:28.819104   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:28.819110   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:28.819118   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:28.819123   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:28 GMT
	I1128 00:10:28.819130   28506 round_trippers.go:580]     Audit-Id: 1154cae8-0e93-466f-888a-6312194f14f7
	I1128 00:10:28.819138   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:28.819147   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:28.820646   28506 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1338"},"items":[{"metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"910","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82079 chars]
	I1128 00:10:28.823114   28506 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace to be "Ready" ...
	I1128 00:10:28.823188   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9vws5
	I1128 00:10:28.823199   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:28.823209   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:28.823219   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:28.825465   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:10:28.825481   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:28.825486   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:28.825492   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:28.825497   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:28.825502   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:28 GMT
	I1128 00:10:28.825508   28506 round_trippers.go:580]     Audit-Id: 9596dbb7-0886-4c02-bcbf-bbb1c28c219e
	I1128 00:10:28.825513   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:28.825666   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-9vws5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"66ac3c18-9997-49aa-a154-ade69c138f12","resourceVersion":"910","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a367bfb3-a8ac-43bd-87d7-5e00ecbff652","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a367bfb3-a8ac-43bd-87d7-5e00ecbff652\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1128 00:10:28.826047   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:10:28.826060   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:28.826070   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:28.826078   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:28.828030   28506 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 00:10:28.828047   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:28.828056   28506 round_trippers.go:580]     Audit-Id: 86f584ba-0b65-4c40-bb96-b750633ef610
	I1128 00:10:28.828062   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:28.828067   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:28.828073   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:28.828079   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:28.828084   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:28 GMT
	I1128 00:10:28.828611   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"930","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1128 00:10:28.828900   28506 pod_ready.go:92] pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace has status "Ready":"True"
	I1128 00:10:28.828915   28506 pod_ready.go:81] duration metric: took 5.77944ms waiting for pod "coredns-5dd5756b68-9vws5" in "kube-system" namespace to be "Ready" ...
	I1128 00:10:28.828922   28506 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:10:28.828958   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-883509
	I1128 00:10:28.828965   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:28.828972   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:28.828978   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:28.831366   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:10:28.831386   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:28.831396   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:28.831409   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:28 GMT
	I1128 00:10:28.831433   28506 round_trippers.go:580]     Audit-Id: 3e103862-b1a8-42f2-b9af-5fc6a2df6717
	I1128 00:10:28.831446   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:28.831454   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:28.831465   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:28.831581   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-883509","namespace":"kube-system","uid":"58bb8943-0a7c-4d4c-a090-ea8de587f504","resourceVersion":"887","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.159:2379","kubernetes.io/config.hash":"8d23c211c8738dad6e022e03cd2c9ea7","kubernetes.io/config.mirror":"8d23c211c8738dad6e022e03cd2c9ea7","kubernetes.io/config.seen":"2023-11-27T23:54:53.116542435Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1128 00:10:28.831991   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:10:28.832009   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:28.832019   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:28.832034   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:28.834927   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:10:28.834939   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:28.834945   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:28 GMT
	I1128 00:10:28.834950   28506 round_trippers.go:580]     Audit-Id: bb1b7804-1f75-4aa8-9b1f-bf09b2bbde07
	I1128 00:10:28.834955   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:28.834960   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:28.834965   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:28.834970   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:28.835384   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"930","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1128 00:10:28.835629   28506 pod_ready.go:92] pod "etcd-multinode-883509" in "kube-system" namespace has status "Ready":"True"
	I1128 00:10:28.835641   28506 pod_ready.go:81] duration metric: took 6.713837ms waiting for pod "etcd-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:10:28.835654   28506 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:10:28.835691   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-883509
	I1128 00:10:28.835698   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:28.835705   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:28.835710   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:28.837769   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:10:28.837785   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:28.837794   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:28.837802   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:28.837811   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:28 GMT
	I1128 00:10:28.837822   28506 round_trippers.go:580]     Audit-Id: 9ba82702-d89b-43d3-8ee3-1548c94e9772
	I1128 00:10:28.837832   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:28.837842   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:28.837988   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-883509","namespace":"kube-system","uid":"0a144c07-5db8-418a-ad15-110fabc7f377","resourceVersion":"880","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.159:8443","kubernetes.io/config.hash":"3b5e7b5fdb84862f46e6248e54c84795","kubernetes.io/config.mirror":"3b5e7b5fdb84862f46e6248e54c84795","kubernetes.io/config.seen":"2023-11-27T23:54:53.116543447Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1128 00:10:28.838288   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:10:28.838300   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:28.838306   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:28.838312   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:28.839874   28506 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 00:10:28.839891   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:28.839899   28506 round_trippers.go:580]     Audit-Id: 656400f2-bca6-44c3-853d-6c38a94112d5
	I1128 00:10:28.839907   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:28.839916   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:28.839929   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:28.839935   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:28.839940   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:28 GMT
	I1128 00:10:28.840138   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"930","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1128 00:10:28.840407   28506 pod_ready.go:92] pod "kube-apiserver-multinode-883509" in "kube-system" namespace has status "Ready":"True"
	I1128 00:10:28.840420   28506 pod_ready.go:81] duration metric: took 4.758392ms waiting for pod "kube-apiserver-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:10:28.840427   28506 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:10:28.840461   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-883509
	I1128 00:10:28.840468   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:28.840475   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:28.840481   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:28.842320   28506 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 00:10:28.842334   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:28.842343   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:28.842352   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:28.842360   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:28 GMT
	I1128 00:10:28.842370   28506 round_trippers.go:580]     Audit-Id: 38dcf165-a8d0-4f1a-aee5-9b3b56ebe18c
	I1128 00:10:28.842379   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:28.842394   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:28.842664   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-883509","namespace":"kube-system","uid":"f8474e48-c333-4772-ae1f-59cdb2bf95eb","resourceVersion":"882","creationTimestamp":"2023-11-27T23:54:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"de58e44a016d081ac103af6880ca64f0","kubernetes.io/config.mirror":"de58e44a016d081ac103af6880ca64f0","kubernetes.io/config.seen":"2023-11-27T23:54:53.116544230Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1128 00:10:28.842960   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:10:28.842973   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:28.842983   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:28.842992   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:28.844705   28506 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 00:10:28.844722   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:28.844730   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:28 GMT
	I1128 00:10:28.844739   28506 round_trippers.go:580]     Audit-Id: 13268086-5235-4312-adfa-cf54196e2de3
	I1128 00:10:28.844762   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:28.844771   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:28.844788   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:28.844796   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:28.845035   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"930","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1128 00:10:28.845284   28506 pod_ready.go:92] pod "kube-controller-manager-multinode-883509" in "kube-system" namespace has status "Ready":"True"
	I1128 00:10:28.845298   28506 pod_ready.go:81] duration metric: took 4.864841ms waiting for pod "kube-controller-manager-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:10:28.845310   28506 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6dvv4" in "kube-system" namespace to be "Ready" ...
	I1128 00:10:29.012484   28506 request.go:629] Waited for 167.12439ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6dvv4
	I1128 00:10:29.012563   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6dvv4
	I1128 00:10:29.012570   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:29.012581   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:29.012595   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:29.015705   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:10:29.015736   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:29.015748   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:29.015758   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:29.015768   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:29.015778   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:29.015788   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:29 GMT
	I1128 00:10:29.015798   28506 round_trippers.go:580]     Audit-Id: 6aa79859-9541-4574-b443-bbe7fd57259f
	I1128 00:10:29.016366   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6dvv4","generateName":"kube-proxy-","namespace":"kube-system","uid":"c6651c7d-33a2-4a46-9d73-e60ee19557fa","resourceVersion":"1333","creationTimestamp":"2023-11-27T23:56:37Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"dea68644-28a8-4da5-b7c7-c0035d2ae817","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:56:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dea68644-28a8-4da5-b7c7-c0035d2ae817\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I1128 00:10:29.212290   28506 request.go:629] Waited for 195.423704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m03
	I1128 00:10:29.212362   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m03
	I1128 00:10:29.212369   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:29.212380   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:29.212403   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:29.215060   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:10:29.215080   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:29.215092   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:29 GMT
	I1128 00:10:29.215099   28506 round_trippers.go:580]     Audit-Id: 97ef27df-0f44-4b6a-afe8-c1563c13848a
	I1128 00:10:29.215104   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:29.215110   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:29.215114   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:29.215120   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:29.215490   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m03","uid":"153f6806-1c7a-4bdc-af33-53dcf9bdc333","resourceVersion":"1330","creationTimestamp":"2023-11-28T00:10:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T00:10:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T00:10:27Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vol [truncated 3442 chars]
	I1128 00:10:29.412276   28506 request.go:629] Waited for 196.470752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6dvv4
	I1128 00:10:29.412363   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6dvv4
	I1128 00:10:29.412371   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:29.412382   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:29.412397   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:29.416022   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:10:29.416049   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:29.416059   28506 round_trippers.go:580]     Audit-Id: 14402806-2351-4849-be06-5b2a8dcff5fa
	I1128 00:10:29.416067   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:29.416081   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:29.416089   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:29.416098   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:29.416112   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:29 GMT
	I1128 00:10:29.416444   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6dvv4","generateName":"kube-proxy-","namespace":"kube-system","uid":"c6651c7d-33a2-4a46-9d73-e60ee19557fa","resourceVersion":"1333","creationTimestamp":"2023-11-27T23:56:37Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"dea68644-28a8-4da5-b7c7-c0035d2ae817","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:56:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dea68644-28a8-4da5-b7c7-c0035d2ae817\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I1128 00:10:29.612324   28506 request.go:629] Waited for 195.356891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m03
	I1128 00:10:29.612392   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m03
	I1128 00:10:29.612399   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:29.612407   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:29.612415   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:29.614942   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:10:29.614959   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:29.614966   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:29.614976   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:29.614981   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:29.614986   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:29 GMT
	I1128 00:10:29.614991   28506 round_trippers.go:580]     Audit-Id: 5ab9d7a0-0098-48d3-b2e8-46af2288763f
	I1128 00:10:29.614998   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:29.615145   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m03","uid":"153f6806-1c7a-4bdc-af33-53dcf9bdc333","resourceVersion":"1330","creationTimestamp":"2023-11-28T00:10:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T00:10:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T00:10:27Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vol [truncated 3442 chars]
	I1128 00:10:30.116162   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6dvv4
	I1128 00:10:30.116185   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:30.116193   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:30.116199   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:30.119716   28506 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 00:10:30.119753   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:30.119763   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:30.119790   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:30.119802   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:30 GMT
	I1128 00:10:30.119811   28506 round_trippers.go:580]     Audit-Id: 57a512fa-f766-4862-ad95-4ff3ffdd243e
	I1128 00:10:30.119822   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:30.119834   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:30.120290   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6dvv4","generateName":"kube-proxy-","namespace":"kube-system","uid":"c6651c7d-33a2-4a46-9d73-e60ee19557fa","resourceVersion":"1350","creationTimestamp":"2023-11-27T23:56:37Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"dea68644-28a8-4da5-b7c7-c0035d2ae817","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:56:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dea68644-28a8-4da5-b7c7-c0035d2ae817\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I1128 00:10:30.120689   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m03
	I1128 00:10:30.120707   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:30.120717   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:30.120726   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:30.122883   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:10:30.122904   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:30.122912   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:30.122920   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:30.122932   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:30 GMT
	I1128 00:10:30.122941   28506 round_trippers.go:580]     Audit-Id: d5ae95ba-0074-4def-b0b1-1e49144fdb1c
	I1128 00:10:30.122953   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:30.122985   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:30.123142   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m03","uid":"153f6806-1c7a-4bdc-af33-53dcf9bdc333","resourceVersion":"1330","creationTimestamp":"2023-11-28T00:10:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T00:10:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T00:10:27Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vol [truncated 3442 chars]
	I1128 00:10:30.123450   28506 pod_ready.go:92] pod "kube-proxy-6dvv4" in "kube-system" namespace has status "Ready":"True"
	I1128 00:10:30.123471   28506 pod_ready.go:81] duration metric: took 1.278152451s waiting for pod "kube-proxy-6dvv4" in "kube-system" namespace to be "Ready" ...
	I1128 00:10:30.123483   28506 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7g246" in "kube-system" namespace to be "Ready" ...
	I1128 00:10:30.212552   28506 request.go:629] Waited for 89.010848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7g246
	I1128 00:10:30.212603   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7g246
	I1128 00:10:30.212608   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:30.212615   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:30.212621   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:30.217858   28506 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1128 00:10:30.217883   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:30.217893   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:30.217901   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:30.217909   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:30 GMT
	I1128 00:10:30.217918   28506 round_trippers.go:580]     Audit-Id: ae479f10-ac75-43c5-9304-97865f5cc3e7
	I1128 00:10:30.217934   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:30.217941   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:30.218305   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7g246","generateName":"kube-proxy-","namespace":"kube-system","uid":"c03a2053-f013-4269-a5e1-0acfebfc606c","resourceVersion":"810","creationTimestamp":"2023-11-27T23:55:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"dea68644-28a8-4da5-b7c7-c0035d2ae817","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dea68644-28a8-4da5-b7c7-c0035d2ae817\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1128 00:10:30.412045   28506 request.go:629] Waited for 193.361606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:10:30.412098   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:10:30.412114   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:30.412122   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:30.412128   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:30.416628   28506 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 00:10:30.416651   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:30.416658   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:30.416664   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:30 GMT
	I1128 00:10:30.416671   28506 round_trippers.go:580]     Audit-Id: 6595266d-5da3-4fc6-a32c-fc763d13d6e5
	I1128 00:10:30.416679   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:30.416686   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:30.416694   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:30.417775   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"930","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1128 00:10:30.418064   28506 pod_ready.go:92] pod "kube-proxy-7g246" in "kube-system" namespace has status "Ready":"True"
	I1128 00:10:30.418078   28506 pod_ready.go:81] duration metric: took 294.582405ms waiting for pod "kube-proxy-7g246" in "kube-system" namespace to be "Ready" ...
	I1128 00:10:30.418094   28506 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fvsj6" in "kube-system" namespace to be "Ready" ...
	I1128 00:10:30.612535   28506 request.go:629] Waited for 194.373208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvsj6
	I1128 00:10:30.612601   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvsj6
	I1128 00:10:30.612606   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:30.612614   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:30.612621   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:30.615243   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:10:30.615268   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:30.615278   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:30.615287   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:30 GMT
	I1128 00:10:30.615294   28506 round_trippers.go:580]     Audit-Id: 81bbde20-77d0-49e7-8a2c-2e60e715e68a
	I1128 00:10:30.615301   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:30.615312   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:30.615319   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:30.615596   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fvsj6","generateName":"kube-proxy-","namespace":"kube-system","uid":"d0e7a02e-868c-4774-885c-8b5ad728f451","resourceVersion":"1175","creationTimestamp":"2023-11-27T23:55:46Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"dea68644-28a8-4da5-b7c7-c0035d2ae817","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:55:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dea68644-28a8-4da5-b7c7-c0035d2ae817\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I1128 00:10:30.812450   28506 request.go:629] Waited for 196.348777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1128 00:10:30.812501   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509-m02
	I1128 00:10:30.812505   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:30.812513   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:30.812522   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:30.814831   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:10:30.814856   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:30.814867   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:30.814874   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:30.814882   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:30.814899   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:30 GMT
	I1128 00:10:30.814906   28506 round_trippers.go:580]     Audit-Id: 9f9aad13-7e02-4314-8be9-87ce675386d1
	I1128 00:10:30.814915   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:30.815122   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509-m02","uid":"d053a818-316c-479c-8722-1b9e01fced24","resourceVersion":"1155","creationTimestamp":"2023-11-28T00:08:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T00:08:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T00:08:46Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I1128 00:10:30.815414   28506 pod_ready.go:92] pod "kube-proxy-fvsj6" in "kube-system" namespace has status "Ready":"True"
	I1128 00:10:30.815430   28506 pod_ready.go:81] duration metric: took 397.324578ms waiting for pod "kube-proxy-fvsj6" in "kube-system" namespace to be "Ready" ...
	I1128 00:10:30.815438   28506 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:10:31.011864   28506 request.go:629] Waited for 196.366623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-883509
	I1128 00:10:31.011933   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-883509
	I1128 00:10:31.011940   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:31.011948   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:31.011955   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:31.014614   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:10:31.014631   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:31.014638   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:31.014643   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:31.014656   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:31.014675   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:31 GMT
	I1128 00:10:31.014684   28506 round_trippers.go:580]     Audit-Id: 889c7366-9e40-4c77-948f-befebf97ff0c
	I1128 00:10:31.014693   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:31.015163   28506 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-883509","namespace":"kube-system","uid":"191f6a8c-7604-4f03-ba5a-d717b27f634b","resourceVersion":"902","creationTimestamp":"2023-11-27T23:54:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f3690327bcacf0b7b0b21542aa013461","kubernetes.io/config.mirror":"f3690327bcacf0b7b0b21542aa013461","kubernetes.io/config.seen":"2023-11-27T23:54:44.598174974Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-27T23:54:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1128 00:10:31.212012   28506 request.go:629] Waited for 196.507844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:10:31.212080   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes/multinode-883509
	I1128 00:10:31.212086   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:31.212097   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:31.212106   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:31.214655   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:10:31.214677   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:31.214688   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:31.214696   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:31.214705   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:31.214715   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:31.214723   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:31 GMT
	I1128 00:10:31.214730   28506 round_trippers.go:580]     Audit-Id: 7778fac3-e3fe-4ddf-ab92-8f3b754cfae9
	I1128 00:10:31.214958   28506 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"930","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-27T23:54:49Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1128 00:10:31.215283   28506 pod_ready.go:92] pod "kube-scheduler-multinode-883509" in "kube-system" namespace has status "Ready":"True"
	I1128 00:10:31.215298   28506 pod_ready.go:81] duration metric: took 399.854653ms waiting for pod "kube-scheduler-multinode-883509" in "kube-system" namespace to be "Ready" ...
	I1128 00:10:31.215310   28506 pod_ready.go:38] duration metric: took 2.400308974s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:10:31.215329   28506 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:10:31.215371   28506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:10:31.228935   28506 system_svc.go:56] duration metric: took 13.601688ms WaitForService to wait for kubelet.
	I1128 00:10:31.228954   28506 kubeadm.go:581] duration metric: took 2.43811485s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:10:31.228970   28506 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:10:31.412390   28506 request.go:629] Waited for 183.338719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.159:8443/api/v1/nodes
	I1128 00:10:31.412456   28506 round_trippers.go:463] GET https://192.168.39.159:8443/api/v1/nodes
	I1128 00:10:31.412463   28506 round_trippers.go:469] Request Headers:
	I1128 00:10:31.412475   28506 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 00:10:31.412487   28506 round_trippers.go:473]     Accept: application/json, */*
	I1128 00:10:31.415320   28506 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 00:10:31.415348   28506 round_trippers.go:577] Response Headers:
	I1128 00:10:31.415356   28506 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 66392bc2-eee0-48c4-860d-58a861948e7d
	I1128 00:10:31.415363   28506 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9fd19551-b875-4ab9-ae88-6215be0b4bb9
	I1128 00:10:31.415372   28506 round_trippers.go:580]     Date: Tue, 28 Nov 2023 00:10:31 GMT
	I1128 00:10:31.415379   28506 round_trippers.go:580]     Audit-Id: 2eea3f3b-6abb-44bd-8c28-35b353468b96
	I1128 00:10:31.415387   28506 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 00:10:31.415396   28506 round_trippers.go:580]     Content-Type: application/json
	I1128 00:10:31.415762   28506 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1353"},"items":[{"metadata":{"name":"multinode-883509","uid":"91633a1c-c015-4c2f-9f77-de1a8f570727","resourceVersion":"930","creationTimestamp":"2023-11-27T23:54:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-883509","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c8086df69b157f30f19d083fe45cc014f102df45","minikube.k8s.io/name":"multinode-883509","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_27T23_54_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15134 chars]
	I1128 00:10:31.416505   28506 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:10:31.416528   28506 node_conditions.go:123] node cpu capacity is 2
	I1128 00:10:31.416541   28506 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:10:31.416548   28506 node_conditions.go:123] node cpu capacity is 2
	I1128 00:10:31.416555   28506 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:10:31.416561   28506 node_conditions.go:123] node cpu capacity is 2
	I1128 00:10:31.416569   28506 node_conditions.go:105] duration metric: took 187.592888ms to run NodePressure ...
	I1128 00:10:31.416587   28506 start.go:228] waiting for startup goroutines ...
	I1128 00:10:31.416613   28506 start.go:242] writing updated cluster config ...
	I1128 00:10:31.416975   28506 ssh_runner.go:195] Run: rm -f paused
	I1128 00:10:31.465004   28506 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 00:10:31.467019   28506 out.go:177] * Done! kubectl is now configured to use "multinode-883509" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 00:04:44 UTC, ends at Tue 2023-11-28 00:10:32 UTC. --
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.583875671Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:fc7a1b89441c18d49afe17b478683f763fd29f0e1b39b1b95078f37f406e605c,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-9vws5,Uid:66ac3c18-9997-49aa-a154-ade69c138f12,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701129932631496912,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-9vws5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ac3c18-9997-49aa-a154-ade69c138f12,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T00:05:16.521732221Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d9b2349fd6e1fd460222b1006f62004d3182aeac7f03b8e989045800b2d00d4a,Metadata:&PodSandboxMetadata{Name:busybox-5bc68d56bd-9qz8x,Uid:1d66953d-2cb8-45f7-a90b-c03b40f3fa0e,Namespace:default,
Attempt:0,},State:SANDBOX_READY,CreatedAt:1701129932622480372,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5bc68d56bd-9qz8x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d66953d-2cb8-45f7-a90b-c03b40f3fa0e,pod-template-hash: 5bc68d56bd,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T00:05:16.521721500Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c137570b9187e10672c57ed40777d53304cb050364088f49e573ea1f69340b2,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e59cdfcb-f7c6-4be9-a2e1-0931d582343c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701129916893393959,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e59cdfcb-f7c6-4be9-a2e1-0931d582343c,},Annotations:map[string]st
ring{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-11-28T00:05:16.521735295Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6d4dd3ce23acbd98c998f4c66ec34992489f122da4ff04addaaf037cb7d7eae1,Metadata:&PodSandboxMetadata{Name:kube-proxy-7g246,Uid:c03a2053-f013-4269-a5e1-0acfebfc606c,Namespace:kube-system,At
tempt:0,},State:SANDBOX_READY,CreatedAt:1701129916870851475,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7g246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03a2053-f013-4269-a5e1-0acfebfc606c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T00:05:16.521734231Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6c0a7d792e97c22e77df63cd6edc75527c01c07d3a54b5e7781f5b0d0ae0319f,Metadata:&PodSandboxMetadata{Name:kindnet-ztt77,Uid:acbfe061-9a56-4999-baed-ef8d73dc9222,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701129916866848323,Labels:map[string]string{app: kindnet,controller-revision-hash: 5666b6c4d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-ztt77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acbfe061-9a56-4999-baed-ef8d73dc9222,k8s-app: kindnet,pod-template-genera
tion: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T00:05:16.521728303Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:710c1d70b74936592210819a16903bb36d6499a80ef77c5a33fa12c25c20b2a0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-883509,Uid:f3690327bcacf0b7b0b21542aa013461,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701129911094698538,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3690327bcacf0b7b0b21542aa013461,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f3690327bcacf0b7b0b21542aa013461,kubernetes.io/config.seen: 2023-11-28T00:05:10.512622761Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d23950ae83bc7d47f97c5b7f1c698454da894c9b6003a23593061e1910a7b431,Metadata:&PodSandboxMetadata{Name:kube-apiserver-mult
inode-883509,Uid:3b5e7b5fdb84862f46e6248e54c84795,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701129911077410142,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5e7b5fdb84862f46e6248e54c84795,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.159:8443,kubernetes.io/config.hash: 3b5e7b5fdb84862f46e6248e54c84795,kubernetes.io/config.seen: 2023-11-28T00:05:10.512620963Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:176d60e485c3fc6911c45745d16e33f196ed025c35658ae6abe0c2456dd0966e,Metadata:&PodSandboxMetadata{Name:etcd-multinode-883509,Uid:8d23c211c8738dad6e022e03cd2c9ea7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701129911053495980,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernet
es.pod.name: etcd-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d23c211c8738dad6e022e03cd2c9ea7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.159:2379,kubernetes.io/config.hash: 8d23c211c8738dad6e022e03cd2c9ea7,kubernetes.io/config.seen: 2023-11-28T00:05:10.512617107Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:526cce0cba2752a1f336bbecd7f61581fb601fa454ac8da0deb3715e0a514a3f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-883509,Uid:de58e44a016d081ac103af6880ca64f0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701129911039569697,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de58e44a016d081ac103af6880ca64f0,tier: control-plane,},Annotations:map[string]string{kubern
etes.io/config.hash: de58e44a016d081ac103af6880ca64f0,kubernetes.io/config.seen: 2023-11-28T00:05:10.512621948Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=96754277-792b-4d71-9424-7d26651863d3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.584804646Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1adf23a0-df5a-4f26-86d6-dfc943b20adb name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.584879764Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1adf23a0-df5a-4f26-86d6-dfc943b20adb name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.585060900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddc25d69ef84ec1f27870115ef805a63a56ceedff33531583ee290b3ac67a03a,PodSandboxId:1c137570b9187e10672c57ed40777d53304cb050364088f49e573ea1f69340b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701129947789888280,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e59cdfcb-f7c6-4be9-a2e1-0931d582343c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c7c57c6,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89079730feb228bf352a4029d436c109c6ab86be1e720d8ced93904ae66e489f,PodSandboxId:d9b2349fd6e1fd460222b1006f62004d3182aeac7f03b8e989045800b2d00d4a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701129936054625259,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-9qz8x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d66953d-2cb8-45f7-a90b-c03b40f3fa0e,},Annotations:map[string]string{io.kubernetes.container.hash: e95f39a2,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063d163d4ba6e549b22c3ca52ebba86d1d50f720b0e0870b473b7f5d7abd9ec1,PodSandboxId:fc7a1b89441c18d49afe17b478683f763fd29f0e1b39b1b95078f37f406e605c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701129933296406643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vws5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ac3c18-9997-49aa-a154-ade69c138f12,},Annotations:map[string]string{io.kubernetes.container.hash: d384be83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7db1d8dbd9a4d229badfe5f020c196b8eb292bf250711fe0a656073da8975787,PodSandboxId:6c0a7d792e97c22e77df63cd6edc75527c01c07d3a54b5e7781f5b0d0ae0319f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701129920156357927,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ztt77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: acbfe061-9a56-4999-baed-ef8d73dc9222,},Annotations:map[string]string{io.kubernetes.container.hash: 78700d1a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a1f086fd643f010eaedbf56d3fb2c51c5bddf6c4cba72343f3d9cf5d343f34e,PodSandboxId:6d4dd3ce23acbd98c998f4c66ec34992489f122da4ff04addaaf037cb7d7eae1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701129917590821475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03a2053-f013-4269-a5e1-0acfeb
fc606c,},Annotations:map[string]string{io.kubernetes.container.hash: da634c38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bff0e07a92a879ae5ae63728a407a6b2ea9ce7faab6a90e8981c46e4a787fbe,PodSandboxId:176d60e485c3fc6911c45745d16e33f196ed025c35658ae6abe0c2456dd0966e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701129912114738895,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d23c211c8738dad6e022e03cd2c9ea7,},Annotations:map[string]string{io.kubernetes
.container.hash: a686aad8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0278b2c2e8413d4e2144f2a31bc0268fa5296cd15be586194b49125bd9c36aec,PodSandboxId:710c1d70b74936592210819a16903bb36d6499a80ef77c5a33fa12c25c20b2a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701129911760063211,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3690327bcacf0b7b0b21542aa013461,},Annotations:map[string]string{io.kubernetes.container.h
ash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c66474e913f0f2eb58549efe677e090aa78caf50bbfa494d194d9127d79111,PodSandboxId:d23950ae83bc7d47f97c5b7f1c698454da894c9b6003a23593061e1910a7b431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701129911705356026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5e7b5fdb84862f46e6248e54c84795,},Annotations:map[string]string{io.kubernetes.container.hash: 5292279
e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf80b1508f45da73c41961a9893c179b7570326e477a46dbddf16f195714da65,PodSandboxId:526cce0cba2752a1f336bbecd7f61581fb601fa454ac8da0deb3715e0a514a3f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701129911493420316,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de58e44a016d081ac103af6880ca64f0,},Annotations:map[string]string{io.kubernetes
.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1adf23a0-df5a-4f26-86d6-dfc943b20adb name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.585505286Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=103ba9e5-5387-4d49-a724-65887d7be3dd name=/runtime.v1.RuntimeService/Version
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.585594798Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=103ba9e5-5387-4d49-a724-65887d7be3dd name=/runtime.v1.RuntimeService/Version
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.587405596Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=18cfac22-93de-4da5-af2a-1f9a5aa93085 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.587820010Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701130232587802487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=18cfac22-93de-4da5-af2a-1f9a5aa93085 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.588731015Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2302f3d8-068c-4918-82af-a5de6a929523 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.588794000Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2302f3d8-068c-4918-82af-a5de6a929523 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.588970955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddc25d69ef84ec1f27870115ef805a63a56ceedff33531583ee290b3ac67a03a,PodSandboxId:1c137570b9187e10672c57ed40777d53304cb050364088f49e573ea1f69340b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701129947789888280,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e59cdfcb-f7c6-4be9-a2e1-0931d582343c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c7c57c6,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89079730feb228bf352a4029d436c109c6ab86be1e720d8ced93904ae66e489f,PodSandboxId:d9b2349fd6e1fd460222b1006f62004d3182aeac7f03b8e989045800b2d00d4a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701129936054625259,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-9qz8x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d66953d-2cb8-45f7-a90b-c03b40f3fa0e,},Annotations:map[string]string{io.kubernetes.container.hash: e95f39a2,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063d163d4ba6e549b22c3ca52ebba86d1d50f720b0e0870b473b7f5d7abd9ec1,PodSandboxId:fc7a1b89441c18d49afe17b478683f763fd29f0e1b39b1b95078f37f406e605c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701129933296406643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vws5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ac3c18-9997-49aa-a154-ade69c138f12,},Annotations:map[string]string{io.kubernetes.container.hash: d384be83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7db1d8dbd9a4d229badfe5f020c196b8eb292bf250711fe0a656073da8975787,PodSandboxId:6c0a7d792e97c22e77df63cd6edc75527c01c07d3a54b5e7781f5b0d0ae0319f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701129920156357927,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ztt77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: acbfe061-9a56-4999-baed-ef8d73dc9222,},Annotations:map[string]string{io.kubernetes.container.hash: 78700d1a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a1f086fd643f010eaedbf56d3fb2c51c5bddf6c4cba72343f3d9cf5d343f34e,PodSandboxId:6d4dd3ce23acbd98c998f4c66ec34992489f122da4ff04addaaf037cb7d7eae1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701129917590821475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03a2053-f013-4269-a5e1-0acfeb
fc606c,},Annotations:map[string]string{io.kubernetes.container.hash: da634c38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93e2d43cb5432553149cd46161344c6929a552d48a057e3e61b2703177a22fe4,PodSandboxId:1c137570b9187e10672c57ed40777d53304cb050364088f49e573ea1f69340b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701129917489033910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e59cdfcb-f7c6-4be9-a2e1-0931d582
343c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c7c57c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bff0e07a92a879ae5ae63728a407a6b2ea9ce7faab6a90e8981c46e4a787fbe,PodSandboxId:176d60e485c3fc6911c45745d16e33f196ed025c35658ae6abe0c2456dd0966e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701129912114738895,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d23c211c8738dad6e022e03cd2c9ea7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: a686aad8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0278b2c2e8413d4e2144f2a31bc0268fa5296cd15be586194b49125bd9c36aec,PodSandboxId:710c1d70b74936592210819a16903bb36d6499a80ef77c5a33fa12c25c20b2a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701129911760063211,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3690327bcacf0b7b0b21542aa013461,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c66474e913f0f2eb58549efe677e090aa78caf50bbfa494d194d9127d79111,PodSandboxId:d23950ae83bc7d47f97c5b7f1c698454da894c9b6003a23593061e1910a7b431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701129911705356026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5e7b5fdb84862f46e6248e54c84795,},Annotations:map[string]string{io.kubernetes.container.hash: 5292279e,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf80b1508f45da73c41961a9893c179b7570326e477a46dbddf16f195714da65,PodSandboxId:526cce0cba2752a1f336bbecd7f61581fb601fa454ac8da0deb3715e0a514a3f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701129911493420316,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de58e44a016d081ac103af6880ca64f0,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2302f3d8-068c-4918-82af-a5de6a929523 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.629020564Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=71a36d26-3ed5-4988-9c7f-d89e049ff0d8 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.629111363Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=71a36d26-3ed5-4988-9c7f-d89e049ff0d8 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.630300401Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8e8ffcf1-35e8-4d5b-bf2f-036e5f95d50b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.630680974Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701130232630668150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8e8ffcf1-35e8-4d5b-bf2f-036e5f95d50b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.631111631Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f888708b-100a-4910-99a9-d5c5b9ba03c8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.631237971Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f888708b-100a-4910-99a9-d5c5b9ba03c8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.631426323Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddc25d69ef84ec1f27870115ef805a63a56ceedff33531583ee290b3ac67a03a,PodSandboxId:1c137570b9187e10672c57ed40777d53304cb050364088f49e573ea1f69340b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701129947789888280,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e59cdfcb-f7c6-4be9-a2e1-0931d582343c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c7c57c6,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89079730feb228bf352a4029d436c109c6ab86be1e720d8ced93904ae66e489f,PodSandboxId:d9b2349fd6e1fd460222b1006f62004d3182aeac7f03b8e989045800b2d00d4a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701129936054625259,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-9qz8x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d66953d-2cb8-45f7-a90b-c03b40f3fa0e,},Annotations:map[string]string{io.kubernetes.container.hash: e95f39a2,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063d163d4ba6e549b22c3ca52ebba86d1d50f720b0e0870b473b7f5d7abd9ec1,PodSandboxId:fc7a1b89441c18d49afe17b478683f763fd29f0e1b39b1b95078f37f406e605c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701129933296406643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vws5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ac3c18-9997-49aa-a154-ade69c138f12,},Annotations:map[string]string{io.kubernetes.container.hash: d384be83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7db1d8dbd9a4d229badfe5f020c196b8eb292bf250711fe0a656073da8975787,PodSandboxId:6c0a7d792e97c22e77df63cd6edc75527c01c07d3a54b5e7781f5b0d0ae0319f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701129920156357927,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ztt77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: acbfe061-9a56-4999-baed-ef8d73dc9222,},Annotations:map[string]string{io.kubernetes.container.hash: 78700d1a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a1f086fd643f010eaedbf56d3fb2c51c5bddf6c4cba72343f3d9cf5d343f34e,PodSandboxId:6d4dd3ce23acbd98c998f4c66ec34992489f122da4ff04addaaf037cb7d7eae1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701129917590821475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03a2053-f013-4269-a5e1-0acfeb
fc606c,},Annotations:map[string]string{io.kubernetes.container.hash: da634c38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93e2d43cb5432553149cd46161344c6929a552d48a057e3e61b2703177a22fe4,PodSandboxId:1c137570b9187e10672c57ed40777d53304cb050364088f49e573ea1f69340b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701129917489033910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e59cdfcb-f7c6-4be9-a2e1-0931d582
343c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c7c57c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bff0e07a92a879ae5ae63728a407a6b2ea9ce7faab6a90e8981c46e4a787fbe,PodSandboxId:176d60e485c3fc6911c45745d16e33f196ed025c35658ae6abe0c2456dd0966e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701129912114738895,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d23c211c8738dad6e022e03cd2c9ea7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: a686aad8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0278b2c2e8413d4e2144f2a31bc0268fa5296cd15be586194b49125bd9c36aec,PodSandboxId:710c1d70b74936592210819a16903bb36d6499a80ef77c5a33fa12c25c20b2a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701129911760063211,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3690327bcacf0b7b0b21542aa013461,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c66474e913f0f2eb58549efe677e090aa78caf50bbfa494d194d9127d79111,PodSandboxId:d23950ae83bc7d47f97c5b7f1c698454da894c9b6003a23593061e1910a7b431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701129911705356026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5e7b5fdb84862f46e6248e54c84795,},Annotations:map[string]string{io.kubernetes.container.hash: 5292279e,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf80b1508f45da73c41961a9893c179b7570326e477a46dbddf16f195714da65,PodSandboxId:526cce0cba2752a1f336bbecd7f61581fb601fa454ac8da0deb3715e0a514a3f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701129911493420316,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de58e44a016d081ac103af6880ca64f0,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f888708b-100a-4910-99a9-d5c5b9ba03c8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.674782524Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=20e00f7c-8de8-4e7e-a147-fd89783b6afe name=/runtime.v1.RuntimeService/Version
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.674867188Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=20e00f7c-8de8-4e7e-a147-fd89783b6afe name=/runtime.v1.RuntimeService/Version
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.676632934Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e957da31-a612-46fc-ac67-7542cc5b50a9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.677019755Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701130232677004459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e957da31-a612-46fc-ac67-7542cc5b50a9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.677866287Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=68c31679-c123-446e-925c-3aebd416ce2f name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.677932242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=68c31679-c123-446e-925c-3aebd416ce2f name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:10:32 multinode-883509 crio[712]: time="2023-11-28 00:10:32.678129760Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ddc25d69ef84ec1f27870115ef805a63a56ceedff33531583ee290b3ac67a03a,PodSandboxId:1c137570b9187e10672c57ed40777d53304cb050364088f49e573ea1f69340b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701129947789888280,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e59cdfcb-f7c6-4be9-a2e1-0931d582343c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c7c57c6,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89079730feb228bf352a4029d436c109c6ab86be1e720d8ced93904ae66e489f,PodSandboxId:d9b2349fd6e1fd460222b1006f62004d3182aeac7f03b8e989045800b2d00d4a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701129936054625259,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-9qz8x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d66953d-2cb8-45f7-a90b-c03b40f3fa0e,},Annotations:map[string]string{io.kubernetes.container.hash: e95f39a2,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063d163d4ba6e549b22c3ca52ebba86d1d50f720b0e0870b473b7f5d7abd9ec1,PodSandboxId:fc7a1b89441c18d49afe17b478683f763fd29f0e1b39b1b95078f37f406e605c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701129933296406643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9vws5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ac3c18-9997-49aa-a154-ade69c138f12,},Annotations:map[string]string{io.kubernetes.container.hash: d384be83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7db1d8dbd9a4d229badfe5f020c196b8eb292bf250711fe0a656073da8975787,PodSandboxId:6c0a7d792e97c22e77df63cd6edc75527c01c07d3a54b5e7781f5b0d0ae0319f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701129920156357927,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-ztt77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: acbfe061-9a56-4999-baed-ef8d73dc9222,},Annotations:map[string]string{io.kubernetes.container.hash: 78700d1a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a1f086fd643f010eaedbf56d3fb2c51c5bddf6c4cba72343f3d9cf5d343f34e,PodSandboxId:6d4dd3ce23acbd98c998f4c66ec34992489f122da4ff04addaaf037cb7d7eae1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701129917590821475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03a2053-f013-4269-a5e1-0acfeb
fc606c,},Annotations:map[string]string{io.kubernetes.container.hash: da634c38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93e2d43cb5432553149cd46161344c6929a552d48a057e3e61b2703177a22fe4,PodSandboxId:1c137570b9187e10672c57ed40777d53304cb050364088f49e573ea1f69340b2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701129917489033910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e59cdfcb-f7c6-4be9-a2e1-0931d582
343c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c7c57c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bff0e07a92a879ae5ae63728a407a6b2ea9ce7faab6a90e8981c46e4a787fbe,PodSandboxId:176d60e485c3fc6911c45745d16e33f196ed025c35658ae6abe0c2456dd0966e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701129912114738895,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d23c211c8738dad6e022e03cd2c9ea7,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: a686aad8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0278b2c2e8413d4e2144f2a31bc0268fa5296cd15be586194b49125bd9c36aec,PodSandboxId:710c1d70b74936592210819a16903bb36d6499a80ef77c5a33fa12c25c20b2a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701129911760063211,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3690327bcacf0b7b0b21542aa013461,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c66474e913f0f2eb58549efe677e090aa78caf50bbfa494d194d9127d79111,PodSandboxId:d23950ae83bc7d47f97c5b7f1c698454da894c9b6003a23593061e1910a7b431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701129911705356026,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5e7b5fdb84862f46e6248e54c84795,},Annotations:map[string]string{io.kubernetes.container.hash: 5292279e,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf80b1508f45da73c41961a9893c179b7570326e477a46dbddf16f195714da65,PodSandboxId:526cce0cba2752a1f336bbecd7f61581fb601fa454ac8da0deb3715e0a514a3f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701129911493420316,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-883509,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de58e44a016d081ac103af6880ca64f0,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=68c31679-c123-446e-925c-3aebd416ce2f name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ddc25d69ef84e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       2                   1c137570b9187       storage-provisioner
	89079730feb22       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   1                   d9b2349fd6e1f       busybox-5bc68d56bd-9qz8x
	063d163d4ba6e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      4 minutes ago       Running             coredns                   1                   fc7a1b89441c1       coredns-5dd5756b68-9vws5
	7db1d8dbd9a4d       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      5 minutes ago       Running             kindnet-cni               1                   6c0a7d792e97c       kindnet-ztt77
	1a1f086fd643f       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      5 minutes ago       Running             kube-proxy                1                   6d4dd3ce23acb       kube-proxy-7g246
	93e2d43cb5432       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       1                   1c137570b9187       storage-provisioner
	5bff0e07a92a8       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      5 minutes ago       Running             etcd                      1                   176d60e485c3f       etcd-multinode-883509
	0278b2c2e8413       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      5 minutes ago       Running             kube-scheduler            1                   710c1d70b7493       kube-scheduler-multinode-883509
	59c66474e913f       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      5 minutes ago       Running             kube-apiserver            1                   d23950ae83bc7       kube-apiserver-multinode-883509
	bf80b1508f45d       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      5 minutes ago       Running             kube-controller-manager   1                   526cce0cba275       kube-controller-manager-multinode-883509
	
	* 
	* ==> coredns [063d163d4ba6e549b22c3ca52ebba86d1d50f720b0e0870b473b7f5d7abd9ec1] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56181 - 26441 "HINFO IN 3286565602190285867.1911979423281815383. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01472899s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-883509
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-883509
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=multinode-883509
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_27T23_54_54_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 23:54:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-883509
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 00:10:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 00:05:47 +0000   Mon, 27 Nov 2023 23:54:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 00:05:47 +0000   Mon, 27 Nov 2023 23:54:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 00:05:47 +0000   Mon, 27 Nov 2023 23:54:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 00:05:47 +0000   Tue, 28 Nov 2023 00:05:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.159
	  Hostname:    multinode-883509
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 6c49f7519e69445bb7f042f71f13e7f0
	  System UUID:                6c49f751-9e69-445b-b7f0-42f71f13e7f0
	  Boot ID:                    0e39d7b5-e36b-48c9-bfa9-7353328656f4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-9qz8x                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-5dd5756b68-9vws5                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-multinode-883509                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-ztt77                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-multinode-883509             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-multinode-883509    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-7g246                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-multinode-883509             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  Starting                 5m15s                  kube-proxy       
	  Normal  Starting                 15m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node multinode-883509 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node multinode-883509 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node multinode-883509 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     15m                    kubelet          Node multinode-883509 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  15m                    kubelet          Node multinode-883509 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                    kubelet          Node multinode-883509 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           15m                    node-controller  Node multinode-883509 event: Registered Node multinode-883509 in Controller
	  Normal  NodeReady                15m                    kubelet          Node multinode-883509 status is now: NodeReady
	  Normal  Starting                 5m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m22s (x8 over 5m22s)  kubelet          Node multinode-883509 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s (x8 over 5m22s)  kubelet          Node multinode-883509 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s (x7 over 5m22s)  kubelet          Node multinode-883509 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m4s                   node-controller  Node multinode-883509 event: Registered Node multinode-883509 in Controller
	
	
	Name:               multinode-883509-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-883509-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 00:08:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-883509-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 00:10:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 00:08:46 +0000   Tue, 28 Nov 2023 00:08:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 00:08:46 +0000   Tue, 28 Nov 2023 00:08:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 00:08:46 +0000   Tue, 28 Nov 2023 00:08:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 00:08:46 +0000   Tue, 28 Nov 2023 00:08:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    multinode-883509-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a88c66405d8044d3aa460988e7c5446e
	  System UUID:                a88c6640-5d80-44d3-aa46-0988e7c5446e
	  Boot ID:                    4813c5b7-040a-4b4c-8928-bbc37f7efde1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-wqsbj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-t4wlq               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-proxy-fvsj6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 14m                    kube-proxy  
	  Normal   Starting                 104s                   kube-proxy  
	  Normal   NodeHasSufficientMemory  14m (x5 over 14m)      kubelet     Node multinode-883509-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x5 over 14m)      kubelet     Node multinode-883509-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m (x5 over 14m)      kubelet     Node multinode-883509-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                14m                    kubelet     Node multinode-883509-m02 status is now: NodeReady
	  Normal   NodeNotReady             4m25s                  kubelet     Node multinode-883509-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m47s (x3 over 4m47s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 106s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  106s (x2 over 106s)    kubelet     Node multinode-883509-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    106s (x2 over 106s)    kubelet     Node multinode-883509-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     106s (x2 over 106s)    kubelet     Node multinode-883509-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  106s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                106s                   kubelet     Node multinode-883509-m02 status is now: NodeReady
	
	
	Name:               multinode-883509-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-883509-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 00:10:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-883509-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 00:10:27 +0000   Tue, 28 Nov 2023 00:10:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 00:10:27 +0000   Tue, 28 Nov 2023 00:10:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 00:10:27 +0000   Tue, 28 Nov 2023 00:10:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 00:10:27 +0000   Tue, 28 Nov 2023 00:10:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    multinode-883509-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 38f8680c4adc4e7b9c48418827dde507
	  System UUID:                38f8680c-4adc-4e7b-9c48-418827dde507
	  Boot ID:                    46628ed0-a7cb-46a3-ab76-73892a587781
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-6q5sf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kindnet-xtnn9               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-6dvv4            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 13m                kube-proxy  
	  Normal   Starting                 13m                kube-proxy  
	  Normal   Starting                 3s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)  kubelet     Node multinode-883509-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)  kubelet     Node multinode-883509-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)  kubelet     Node multinode-883509-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                13m                kubelet     Node multinode-883509-m03 status is now: NodeReady
	  Normal   Starting                 13m                kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)  kubelet     Node multinode-883509-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet     Node multinode-883509-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)  kubelet     Node multinode-883509-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                kubelet     Node multinode-883509-m03 status is now: NodeReady
	  Normal   NodeNotReady             70s                kubelet     Node multinode-883509-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        12s (x2 over 72s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 5s                 kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)    kubelet     Node multinode-883509-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)    kubelet     Node multinode-883509-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                 kubelet     Node multinode-883509-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)    kubelet     Node multinode-883509-m03 status is now: NodeHasSufficientMemory
	
	* 
	* ==> dmesg <==
	* [Nov28 00:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067528] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.338522] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.372772] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.130396] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.458472] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.400451] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.117826] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.144477] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.098586] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.208313] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[Nov28 00:05] systemd-fstab-generator[912]: Ignoring "noauto" for root device
	[ +18.719104] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [5bff0e07a92a879ae5ae63728a407a6b2ea9ce7faab6a90e8981c46e4a787fbe] <==
	* {"level":"info","ts":"2023-11-28T00:05:13.887997Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-28T00:05:13.888543Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-28T00:05:13.888571Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-28T00:05:13.888285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af switched to configuration voters=(17361235931841906351)"}
	{"level":"info","ts":"2023-11-28T00:05:13.888701Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bc02953927cca850","local-member-id":"f0ef8018a32f46af","added-peer-id":"f0ef8018a32f46af","added-peer-peer-urls":["https://192.168.39.159:2380"]}
	{"level":"info","ts":"2023-11-28T00:05:13.888798Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bc02953927cca850","local-member-id":"f0ef8018a32f46af","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T00:05:13.888841Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T00:05:13.893735Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f0ef8018a32f46af","initial-advertise-peer-urls":["https://192.168.39.159:2380"],"listen-peer-urls":["https://192.168.39.159:2380"],"advertise-client-urls":["https://192.168.39.159:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.159:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-28T00:05:13.888333Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.159:2380"}
	{"level":"info","ts":"2023-11-28T00:05:13.896085Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-28T00:05:13.896992Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.159:2380"}
	{"level":"info","ts":"2023-11-28T00:05:14.872968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-28T00:05:14.873288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-28T00:05:14.873388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af received MsgPreVoteResp from f0ef8018a32f46af at term 2"}
	{"level":"info","ts":"2023-11-28T00:05:14.873429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af became candidate at term 3"}
	{"level":"info","ts":"2023-11-28T00:05:14.873456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af received MsgVoteResp from f0ef8018a32f46af at term 3"}
	{"level":"info","ts":"2023-11-28T00:05:14.873485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0ef8018a32f46af became leader at term 3"}
	{"level":"info","ts":"2023-11-28T00:05:14.873515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f0ef8018a32f46af elected leader f0ef8018a32f46af at term 3"}
	{"level":"info","ts":"2023-11-28T00:05:14.876326Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f0ef8018a32f46af","local-member-attributes":"{Name:multinode-883509 ClientURLs:[https://192.168.39.159:2379]}","request-path":"/0/members/f0ef8018a32f46af/attributes","cluster-id":"bc02953927cca850","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-28T00:05:14.876349Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T00:05:14.876669Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-28T00:05:14.876711Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-28T00:05:14.876371Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T00:05:14.877723Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-28T00:05:14.877737Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.159:2379"}
	
	* 
	* ==> kernel <==
	*  00:10:33 up 5 min,  0 users,  load average: 0.18, 0.21, 0.12
	Linux multinode-883509 5.10.57 #1 SMP Mon Nov 27 21:58:27 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [7db1d8dbd9a4d229badfe5f020c196b8eb292bf250711fe0a656073da8975787] <==
	* I1128 00:10:02.005711       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I1128 00:10:02.006266       1 main.go:227] handling current node
	I1128 00:10:02.006317       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I1128 00:10:02.006330       1 main.go:250] Node multinode-883509-m02 has CIDR [10.244.1.0/24] 
	I1128 00:10:02.006491       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I1128 00:10:02.006526       1 main.go:250] Node multinode-883509-m03 has CIDR [10.244.3.0/24] 
	I1128 00:10:12.013652       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I1128 00:10:12.013721       1 main.go:227] handling current node
	I1128 00:10:12.013752       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I1128 00:10:12.013763       1 main.go:250] Node multinode-883509-m02 has CIDR [10.244.1.0/24] 
	I1128 00:10:12.013994       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I1128 00:10:12.014038       1 main.go:250] Node multinode-883509-m03 has CIDR [10.244.3.0/24] 
	I1128 00:10:22.024865       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I1128 00:10:22.025050       1 main.go:227] handling current node
	I1128 00:10:22.025088       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I1128 00:10:22.025110       1 main.go:250] Node multinode-883509-m02 has CIDR [10.244.1.0/24] 
	I1128 00:10:22.025315       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I1128 00:10:22.025354       1 main.go:250] Node multinode-883509-m03 has CIDR [10.244.3.0/24] 
	I1128 00:10:32.041137       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I1128 00:10:32.041438       1 main.go:227] handling current node
	I1128 00:10:32.041484       1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
	I1128 00:10:32.041517       1 main.go:250] Node multinode-883509-m02 has CIDR [10.244.1.0/24] 
	I1128 00:10:32.041667       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I1128 00:10:32.041698       1 main.go:250] Node multinode-883509-m03 has CIDR [10.244.2.0/24] 
	I1128 00:10:32.041757       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.128 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [59c66474e913f0f2eb58549efe677e090aa78caf50bbfa494d194d9127d79111] <==
	* I1128 00:05:16.252718       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1128 00:05:16.252787       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1128 00:05:16.252870       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1128 00:05:16.349115       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1128 00:05:16.349579       1 aggregator.go:166] initial CRD sync complete...
	I1128 00:05:16.349619       1 autoregister_controller.go:141] Starting autoregister controller
	I1128 00:05:16.349642       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1128 00:05:16.370641       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1128 00:05:16.381796       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1128 00:05:16.445782       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1128 00:05:16.445834       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1128 00:05:16.446727       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1128 00:05:16.447077       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1128 00:05:16.450625       1 shared_informer.go:318] Caches are synced for configmaps
	I1128 00:05:16.450702       1 cache.go:39] Caches are synced for autoregister controller
	I1128 00:05:16.450801       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1128 00:05:16.466707       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1128 00:05:17.254411       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1128 00:05:19.015356       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1128 00:05:19.173152       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1128 00:05:19.187517       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1128 00:05:19.263398       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1128 00:05:19.272472       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1128 00:05:28.883869       1 controller.go:624] quota admission added evaluator for: endpoints
	I1128 00:05:28.898831       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [bf80b1508f45da73c41961a9893c179b7570326e477a46dbddf16f195714da65] <==
	* I1128 00:08:46.448128       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-883509-m03"
	I1128 00:08:46.448376       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-883509-m02\" does not exist"
	I1128 00:08:46.448855       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-lgwvm" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-lgwvm"
	I1128 00:08:46.465848       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-883509-m02" podCIDRs=["10.244.1.0/24"]
	I1128 00:08:46.587534       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-883509-m02"
	I1128 00:08:46.699091       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.929097ms"
	I1128 00:08:46.699243       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="119.077µs"
	I1128 00:08:47.365000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="115.105µs"
	I1128 00:09:00.621086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="68.421µs"
	I1128 00:09:01.197759       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="140.224µs"
	I1128 00:09:01.207780       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="81.383µs"
	I1128 00:09:22.280856       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-883509-m02"
	I1128 00:10:24.058595       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-wqsbj"
	I1128 00:10:24.080542       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="35.50269ms"
	I1128 00:10:24.103055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.476852ms"
	I1128 00:10:24.103246       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="114.045µs"
	I1128 00:10:25.470568       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.190936ms"
	I1128 00:10:25.470873       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="130.325µs"
	I1128 00:10:27.070668       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-883509-m02"
	I1128 00:10:27.774947       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-883509-m02"
	I1128 00:10:27.775364       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-6q5sf" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-6q5sf"
	I1128 00:10:27.775442       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-883509-m03\" does not exist"
	I1128 00:10:27.801541       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-883509-m03" podCIDRs=["10.244.2.0/24"]
	I1128 00:10:27.909409       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-883509-m02"
	I1128 00:10:28.691390       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="49.109µs"
	
	* 
	* ==> kube-proxy [1a1f086fd643f010eaedbf56d3fb2c51c5bddf6c4cba72343f3d9cf5d343f34e] <==
	* I1128 00:05:17.774263       1 server_others.go:69] "Using iptables proxy"
	I1128 00:05:17.784841       1 node.go:141] Successfully retrieved node IP: 192.168.39.159
	I1128 00:05:17.841106       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1128 00:05:17.841265       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1128 00:05:17.844407       1 server_others.go:152] "Using iptables Proxier"
	I1128 00:05:17.844477       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1128 00:05:17.844738       1 server.go:846] "Version info" version="v1.28.4"
	I1128 00:05:17.844799       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 00:05:17.846122       1 config.go:188] "Starting service config controller"
	I1128 00:05:17.846269       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1128 00:05:17.846317       1 config.go:97] "Starting endpoint slice config controller"
	I1128 00:05:17.846341       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1128 00:05:17.846761       1 config.go:315] "Starting node config controller"
	I1128 00:05:17.846796       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1128 00:05:17.947359       1 shared_informer.go:318] Caches are synced for node config
	I1128 00:05:17.947512       1 shared_informer.go:318] Caches are synced for service config
	I1128 00:05:17.947535       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [0278b2c2e8413d4e2144f2a31bc0268fa5296cd15be586194b49125bd9c36aec] <==
	* I1128 00:05:14.087527       1 serving.go:348] Generated self-signed cert in-memory
	W1128 00:05:16.341737       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1128 00:05:16.341791       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 00:05:16.341807       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1128 00:05:16.341907       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1128 00:05:16.378110       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1128 00:05:16.381006       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 00:05:16.386616       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1128 00:05:16.386738       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1128 00:05:16.386775       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1128 00:05:16.386790       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1128 00:05:16.487438       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 00:04:44 UTC, ends at Tue 2023-11-28 00:10:33 UTC. --
	Nov 28 00:05:24 multinode-883509 kubelet[918]: E1128 00:05:24.367790     918 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d66953d-2cb8-45f7-a90b-c03b40f3fa0e-kube-api-access-69pvr podName:1d66953d-2cb8-45f7-a90b-c03b40f3fa0e nodeName:}" failed. No retries permitted until 2023-11-28 00:05:32.367769282 +0000 UTC m=+22.085232347 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-69pvr" (UniqueName: "kubernetes.io/projected/1d66953d-2cb8-45f7-a90b-c03b40f3fa0e-kube-api-access-69pvr") pod "busybox-5bc68d56bd-9qz8x" (UID: "1d66953d-2cb8-45f7-a90b-c03b40f3fa0e") : object "default"/"kube-root-ca.crt" not registered
	Nov 28 00:05:24 multinode-883509 kubelet[918]: E1128 00:05:24.582687     918 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-9vws5" podUID="66ac3c18-9997-49aa-a154-ade69c138f12"
	Nov 28 00:05:24 multinode-883509 kubelet[918]: E1128 00:05:24.584008     918 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-9qz8x" podUID="1d66953d-2cb8-45f7-a90b-c03b40f3fa0e"
	Nov 28 00:05:47 multinode-883509 kubelet[918]: I1128 00:05:47.759023     918 scope.go:117] "RemoveContainer" containerID="93e2d43cb5432553149cd46161344c6929a552d48a057e3e61b2703177a22fe4"
	Nov 28 00:06:10 multinode-883509 kubelet[918]: E1128 00:06:10.712253     918 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 00:06:10 multinode-883509 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 00:06:10 multinode-883509 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 00:06:10 multinode-883509 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 00:07:10 multinode-883509 kubelet[918]: E1128 00:07:10.715050     918 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 00:07:10 multinode-883509 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 00:07:10 multinode-883509 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 00:07:10 multinode-883509 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 00:08:10 multinode-883509 kubelet[918]: E1128 00:08:10.715423     918 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 00:08:10 multinode-883509 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 00:08:10 multinode-883509 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 00:08:10 multinode-883509 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 00:09:10 multinode-883509 kubelet[918]: E1128 00:09:10.714642     918 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 00:09:10 multinode-883509 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 00:09:10 multinode-883509 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 00:09:10 multinode-883509 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 00:10:10 multinode-883509 kubelet[918]: E1128 00:10:10.567452     918 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Nov 28 00:10:10 multinode-883509 kubelet[918]: E1128 00:10:10.714578     918 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 00:10:10 multinode-883509 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 00:10:10 multinode-883509 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 00:10:10 multinode-883509 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-883509 -n multinode-883509
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-883509 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (780.49s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (143.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 stop
E1128 00:11:55.432669   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-883509 stop: exit status 82 (2m1.607907581s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-883509"  ...
	* Stopping node "multinode-883509"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-883509 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-883509 status: exit status 3 (18.811182182s)

                                                
                                                
-- stdout --
	multinode-883509
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-883509-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:12:56.297096   31047 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.159:22: connect: no route to host
	E1128 00:12:56.297128   31047 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.159:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-883509 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-883509 -n multinode-883509
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-883509 -n multinode-883509: exit status 3 (3.165920601s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:12:59.625109   31147 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.159:22: connect: no route to host
	E1128 00:12:59.625132   31147 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.159:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-883509" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.59s)

                                                
                                    
x
+
TestPreload (254.28s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-280327 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1128 00:21:54.034123   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1128 00:21:55.432860   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1128 00:23:50.988202   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-280327 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m36.142545992s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-280327 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-280327 image pull gcr.io/k8s-minikube/busybox: (2.794081651s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-280327
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-280327: (7.099549681s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-280327 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1128 00:24:58.477888   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1128 00:25:27.680748   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-280327 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m25.095524485s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-280327 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:523: *** TestPreload FAILED at 2023-11-28 00:25:28.793508116 +0000 UTC m=+3630.358534756
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-280327 -n test-preload-280327
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-280327 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-280327 logs -n 25: (1.138973153s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-883509 ssh -n                                                                 | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:56 UTC |
	|         | multinode-883509-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-883509 ssh -n multinode-883509 sudo cat                                       | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:56 UTC | 27 Nov 23 23:57 UTC |
	|         | /home/docker/cp-test_multinode-883509-m03_multinode-883509.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-883509 cp multinode-883509-m03:/home/docker/cp-test.txt                       | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:57 UTC | 27 Nov 23 23:57 UTC |
	|         | multinode-883509-m02:/home/docker/cp-test_multinode-883509-m03_multinode-883509-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-883509 ssh -n                                                                 | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:57 UTC | 27 Nov 23 23:57 UTC |
	|         | multinode-883509-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-883509 ssh -n multinode-883509-m02 sudo cat                                   | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:57 UTC | 27 Nov 23 23:57 UTC |
	|         | /home/docker/cp-test_multinode-883509-m03_multinode-883509-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-883509 node stop m03                                                          | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:57 UTC | 27 Nov 23 23:57 UTC |
	| node    | multinode-883509 node start                                                             | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:57 UTC | 27 Nov 23 23:57 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-883509                                                                | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:57 UTC |                     |
	| stop    | -p multinode-883509                                                                     | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:57 UTC |                     |
	| start   | -p multinode-883509                                                                     | multinode-883509     | jenkins | v1.32.0 | 27 Nov 23 23:59 UTC | 28 Nov 23 00:10 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-883509                                                                | multinode-883509     | jenkins | v1.32.0 | 28 Nov 23 00:10 UTC |                     |
	| node    | multinode-883509 node delete                                                            | multinode-883509     | jenkins | v1.32.0 | 28 Nov 23 00:10 UTC | 28 Nov 23 00:10 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-883509 stop                                                                   | multinode-883509     | jenkins | v1.32.0 | 28 Nov 23 00:10 UTC |                     |
	| start   | -p multinode-883509                                                                     | multinode-883509     | jenkins | v1.32.0 | 28 Nov 23 00:12 UTC | 28 Nov 23 00:20 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-883509                                                                | multinode-883509     | jenkins | v1.32.0 | 28 Nov 23 00:20 UTC |                     |
	| start   | -p multinode-883509-m02                                                                 | multinode-883509-m02 | jenkins | v1.32.0 | 28 Nov 23 00:20 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-883509-m03                                                                 | multinode-883509-m03 | jenkins | v1.32.0 | 28 Nov 23 00:20 UTC | 28 Nov 23 00:21 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-883509                                                                 | multinode-883509     | jenkins | v1.32.0 | 28 Nov 23 00:21 UTC |                     |
	| delete  | -p multinode-883509-m03                                                                 | multinode-883509-m03 | jenkins | v1.32.0 | 28 Nov 23 00:21 UTC | 28 Nov 23 00:21 UTC |
	| delete  | -p multinode-883509                                                                     | multinode-883509     | jenkins | v1.32.0 | 28 Nov 23 00:21 UTC | 28 Nov 23 00:21 UTC |
	| start   | -p test-preload-280327                                                                  | test-preload-280327  | jenkins | v1.32.0 | 28 Nov 23 00:21 UTC | 28 Nov 23 00:23 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-280327 image pull                                                          | test-preload-280327  | jenkins | v1.32.0 | 28 Nov 23 00:23 UTC | 28 Nov 23 00:23 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-280327                                                                  | test-preload-280327  | jenkins | v1.32.0 | 28 Nov 23 00:23 UTC | 28 Nov 23 00:24 UTC |
	| start   | -p test-preload-280327                                                                  | test-preload-280327  | jenkins | v1.32.0 | 28 Nov 23 00:24 UTC | 28 Nov 23 00:25 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-280327 image list                                                          | test-preload-280327  | jenkins | v1.32.0 | 28 Nov 23 00:25 UTC | 28 Nov 23 00:25 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 00:24:03
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 00:24:03.516530   34413 out.go:296] Setting OutFile to fd 1 ...
	I1128 00:24:03.516792   34413 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:24:03.516803   34413 out.go:309] Setting ErrFile to fd 2...
	I1128 00:24:03.516809   34413 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:24:03.517028   34413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1128 00:24:03.517542   34413 out.go:303] Setting JSON to false
	I1128 00:24:03.518405   34413 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3991,"bootTime":1701127053,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 00:24:03.518463   34413 start.go:138] virtualization: kvm guest
	I1128 00:24:03.520824   34413 out.go:177] * [test-preload-280327] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 00:24:03.522643   34413 notify.go:220] Checking for updates...
	I1128 00:24:03.522663   34413 out.go:177]   - MINIKUBE_LOCATION=17206
	I1128 00:24:03.524092   34413 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 00:24:03.525669   34413 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:24:03.527103   34413 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1128 00:24:03.528514   34413 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 00:24:03.529891   34413 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 00:24:03.531460   34413 config.go:182] Loaded profile config "test-preload-280327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1128 00:24:03.531832   34413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 00:24:03.531872   34413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:24:03.545978   34413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46733
	I1128 00:24:03.546334   34413 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:24:03.546821   34413 main.go:141] libmachine: Using API Version  1
	I1128 00:24:03.546848   34413 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:24:03.547236   34413 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:24:03.547402   34413 main.go:141] libmachine: (test-preload-280327) Calling .DriverName
	I1128 00:24:03.549075   34413 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1128 00:24:03.550295   34413 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 00:24:03.550606   34413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 00:24:03.550644   34413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:24:03.563893   34413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38511
	I1128 00:24:03.564252   34413 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:24:03.564656   34413 main.go:141] libmachine: Using API Version  1
	I1128 00:24:03.564681   34413 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:24:03.565000   34413 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:24:03.565148   34413 main.go:141] libmachine: (test-preload-280327) Calling .DriverName
	I1128 00:24:03.598140   34413 out.go:177] * Using the kvm2 driver based on existing profile
	I1128 00:24:03.599938   34413 start.go:298] selected driver: kvm2
	I1128 00:24:03.599955   34413 start.go:902] validating driver "kvm2" against &{Name:test-preload-280327 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-280327 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:24:03.600050   34413 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 00:24:03.600826   34413 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:24:03.600895   34413 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17206-4749/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 00:24:03.615241   34413 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1128 00:24:03.615598   34413 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 00:24:03.615677   34413 cni.go:84] Creating CNI manager for ""
	I1128 00:24:03.615696   34413 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:24:03.615714   34413 start_flags.go:323] config:
	{Name:test-preload-280327 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-280327 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:24:03.615907   34413 iso.go:125] acquiring lock: {Name:mkcbf4fbddcb89ef7fa17df683cb708781ecb7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:24:03.618506   34413 out.go:177] * Starting control plane node test-preload-280327 in cluster test-preload-280327
	I1128 00:24:03.619638   34413 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1128 00:24:04.085776   34413 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1128 00:24:04.085832   34413 cache.go:56] Caching tarball of preloaded images
	I1128 00:24:04.086023   34413 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1128 00:24:04.087904   34413 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1128 00:24:04.089169   34413 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1128 00:24:04.201907   34413 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1128 00:24:19.222984   34413 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1128 00:24:19.224028   34413 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1128 00:24:20.121766   34413 cache.go:59] Finished verifying existence of preloaded tar for  v1.24.4 on crio
	I1128 00:24:20.121960   34413 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/test-preload-280327/config.json ...
	I1128 00:24:20.122240   34413 start.go:365] acquiring machines lock for test-preload-280327: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 00:24:20.122329   34413 start.go:369] acquired machines lock for "test-preload-280327" in 56.489µs
	I1128 00:24:20.122353   34413 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:24:20.122366   34413 fix.go:54] fixHost starting: 
	I1128 00:24:20.122673   34413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 00:24:20.122725   34413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:24:20.136835   34413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38531
	I1128 00:24:20.137267   34413 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:24:20.137729   34413 main.go:141] libmachine: Using API Version  1
	I1128 00:24:20.137755   34413 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:24:20.138090   34413 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:24:20.138287   34413 main.go:141] libmachine: (test-preload-280327) Calling .DriverName
	I1128 00:24:20.138485   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetState
	I1128 00:24:20.140002   34413 fix.go:102] recreateIfNeeded on test-preload-280327: state=Stopped err=<nil>
	I1128 00:24:20.140036   34413 main.go:141] libmachine: (test-preload-280327) Calling .DriverName
	W1128 00:24:20.140207   34413 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:24:20.143324   34413 out.go:177] * Restarting existing kvm2 VM for "test-preload-280327" ...
	I1128 00:24:20.144995   34413 main.go:141] libmachine: (test-preload-280327) Calling .Start
	I1128 00:24:20.145177   34413 main.go:141] libmachine: (test-preload-280327) Ensuring networks are active...
	I1128 00:24:20.145801   34413 main.go:141] libmachine: (test-preload-280327) Ensuring network default is active
	I1128 00:24:20.146174   34413 main.go:141] libmachine: (test-preload-280327) Ensuring network mk-test-preload-280327 is active
	I1128 00:24:20.146525   34413 main.go:141] libmachine: (test-preload-280327) Getting domain xml...
	I1128 00:24:20.147189   34413 main.go:141] libmachine: (test-preload-280327) Creating domain...
	I1128 00:24:21.354002   34413 main.go:141] libmachine: (test-preload-280327) Waiting to get IP...
	I1128 00:24:21.354901   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:21.355277   34413 main.go:141] libmachine: (test-preload-280327) DBG | unable to find current IP address of domain test-preload-280327 in network mk-test-preload-280327
	I1128 00:24:21.355362   34413 main.go:141] libmachine: (test-preload-280327) DBG | I1128 00:24:21.355266   34481 retry.go:31] will retry after 266.980854ms: waiting for machine to come up
	I1128 00:24:21.623646   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:21.624023   34413 main.go:141] libmachine: (test-preload-280327) DBG | unable to find current IP address of domain test-preload-280327 in network mk-test-preload-280327
	I1128 00:24:21.624053   34413 main.go:141] libmachine: (test-preload-280327) DBG | I1128 00:24:21.623967   34481 retry.go:31] will retry after 319.237912ms: waiting for machine to come up
	I1128 00:24:21.944495   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:21.944950   34413 main.go:141] libmachine: (test-preload-280327) DBG | unable to find current IP address of domain test-preload-280327 in network mk-test-preload-280327
	I1128 00:24:21.944987   34413 main.go:141] libmachine: (test-preload-280327) DBG | I1128 00:24:21.944920   34481 retry.go:31] will retry after 453.987071ms: waiting for machine to come up
	I1128 00:24:22.400624   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:22.401065   34413 main.go:141] libmachine: (test-preload-280327) DBG | unable to find current IP address of domain test-preload-280327 in network mk-test-preload-280327
	I1128 00:24:22.401097   34413 main.go:141] libmachine: (test-preload-280327) DBG | I1128 00:24:22.401026   34481 retry.go:31] will retry after 564.080293ms: waiting for machine to come up
	I1128 00:24:22.966675   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:22.967137   34413 main.go:141] libmachine: (test-preload-280327) DBG | unable to find current IP address of domain test-preload-280327 in network mk-test-preload-280327
	I1128 00:24:22.967165   34413 main.go:141] libmachine: (test-preload-280327) DBG | I1128 00:24:22.967090   34481 retry.go:31] will retry after 490.383414ms: waiting for machine to come up
	I1128 00:24:23.458706   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:23.459217   34413 main.go:141] libmachine: (test-preload-280327) DBG | unable to find current IP address of domain test-preload-280327 in network mk-test-preload-280327
	I1128 00:24:23.459253   34413 main.go:141] libmachine: (test-preload-280327) DBG | I1128 00:24:23.459157   34481 retry.go:31] will retry after 735.902616ms: waiting for machine to come up
	I1128 00:24:24.197149   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:24.197505   34413 main.go:141] libmachine: (test-preload-280327) DBG | unable to find current IP address of domain test-preload-280327 in network mk-test-preload-280327
	I1128 00:24:24.197533   34413 main.go:141] libmachine: (test-preload-280327) DBG | I1128 00:24:24.197447   34481 retry.go:31] will retry after 765.946068ms: waiting for machine to come up
	I1128 00:24:24.965366   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:24.965760   34413 main.go:141] libmachine: (test-preload-280327) DBG | unable to find current IP address of domain test-preload-280327 in network mk-test-preload-280327
	I1128 00:24:24.965791   34413 main.go:141] libmachine: (test-preload-280327) DBG | I1128 00:24:24.965704   34481 retry.go:31] will retry after 1.330432631s: waiting for machine to come up
	I1128 00:24:26.297500   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:26.297888   34413 main.go:141] libmachine: (test-preload-280327) DBG | unable to find current IP address of domain test-preload-280327 in network mk-test-preload-280327
	I1128 00:24:26.297920   34413 main.go:141] libmachine: (test-preload-280327) DBG | I1128 00:24:26.297838   34481 retry.go:31] will retry after 1.573023659s: waiting for machine to come up
	I1128 00:24:27.872627   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:27.873024   34413 main.go:141] libmachine: (test-preload-280327) DBG | unable to find current IP address of domain test-preload-280327 in network mk-test-preload-280327
	I1128 00:24:27.873046   34413 main.go:141] libmachine: (test-preload-280327) DBG | I1128 00:24:27.872987   34481 retry.go:31] will retry after 2.101215183s: waiting for machine to come up
	I1128 00:24:29.975719   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:29.976321   34413 main.go:141] libmachine: (test-preload-280327) DBG | unable to find current IP address of domain test-preload-280327 in network mk-test-preload-280327
	I1128 00:24:29.976356   34413 main.go:141] libmachine: (test-preload-280327) DBG | I1128 00:24:29.976242   34481 retry.go:31] will retry after 2.367182712s: waiting for machine to come up
	I1128 00:24:32.346238   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:32.346639   34413 main.go:141] libmachine: (test-preload-280327) DBG | unable to find current IP address of domain test-preload-280327 in network mk-test-preload-280327
	I1128 00:24:32.346667   34413 main.go:141] libmachine: (test-preload-280327) DBG | I1128 00:24:32.346575   34481 retry.go:31] will retry after 3.374328876s: waiting for machine to come up
	I1128 00:24:35.723024   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:35.723395   34413 main.go:141] libmachine: (test-preload-280327) DBG | unable to find current IP address of domain test-preload-280327 in network mk-test-preload-280327
	I1128 00:24:35.723429   34413 main.go:141] libmachine: (test-preload-280327) DBG | I1128 00:24:35.723327   34481 retry.go:31] will retry after 2.734808889s: waiting for machine to come up
	I1128 00:24:38.461220   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:38.461519   34413 main.go:141] libmachine: (test-preload-280327) Found IP for machine: 192.168.39.42
	I1128 00:24:38.461543   34413 main.go:141] libmachine: (test-preload-280327) Reserving static IP address...
	I1128 00:24:38.461562   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has current primary IP address 192.168.39.42 and MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:38.461985   34413 main.go:141] libmachine: (test-preload-280327) DBG | found host DHCP lease matching {name: "test-preload-280327", mac: "52:54:00:6a:53:6c", ip: "192.168.39.42"} in network mk-test-preload-280327: {Iface:virbr1 ExpiryTime:2023-11-28 01:24:32 +0000 UTC Type:0 Mac:52:54:00:6a:53:6c Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-280327 Clientid:01:52:54:00:6a:53:6c}
	I1128 00:24:38.462030   34413 main.go:141] libmachine: (test-preload-280327) Reserved static IP address: 192.168.39.42
	I1128 00:24:38.462051   34413 main.go:141] libmachine: (test-preload-280327) DBG | skip adding static IP to network mk-test-preload-280327 - found existing host DHCP lease matching {name: "test-preload-280327", mac: "52:54:00:6a:53:6c", ip: "192.168.39.42"}
	I1128 00:24:38.462073   34413 main.go:141] libmachine: (test-preload-280327) DBG | Getting to WaitForSSH function...
	I1128 00:24:38.462099   34413 main.go:141] libmachine: (test-preload-280327) Waiting for SSH to be available...
	I1128 00:24:38.463906   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:38.464248   34413 main.go:141] libmachine: (test-preload-280327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:53:6c", ip: ""} in network mk-test-preload-280327: {Iface:virbr1 ExpiryTime:2023-11-28 01:24:32 +0000 UTC Type:0 Mac:52:54:00:6a:53:6c Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-280327 Clientid:01:52:54:00:6a:53:6c}
	I1128 00:24:38.464299   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined IP address 192.168.39.42 and MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:38.464363   34413 main.go:141] libmachine: (test-preload-280327) DBG | Using SSH client type: external
	I1128 00:24:38.464403   34413 main.go:141] libmachine: (test-preload-280327) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/test-preload-280327/id_rsa (-rw-------)
	I1128 00:24:38.464448   34413 main.go:141] libmachine: (test-preload-280327) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.42 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/test-preload-280327/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:24:38.464462   34413 main.go:141] libmachine: (test-preload-280327) DBG | About to run SSH command:
	I1128 00:24:38.464488   34413 main.go:141] libmachine: (test-preload-280327) DBG | exit 0
	I1128 00:24:38.560616   34413 main.go:141] libmachine: (test-preload-280327) DBG | SSH cmd err, output: <nil>: 
	I1128 00:24:38.560936   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetConfigRaw
	I1128 00:24:38.561563   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetIP
	I1128 00:24:38.564176   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:38.564473   34413 main.go:141] libmachine: (test-preload-280327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:53:6c", ip: ""} in network mk-test-preload-280327: {Iface:virbr1 ExpiryTime:2023-11-28 01:24:32 +0000 UTC Type:0 Mac:52:54:00:6a:53:6c Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-280327 Clientid:01:52:54:00:6a:53:6c}
	I1128 00:24:38.564498   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined IP address 192.168.39.42 and MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:38.564739   34413 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/test-preload-280327/config.json ...
	I1128 00:24:38.564947   34413 machine.go:88] provisioning docker machine ...
	I1128 00:24:38.564966   34413 main.go:141] libmachine: (test-preload-280327) Calling .DriverName
	I1128 00:24:38.565184   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetMachineName
	I1128 00:24:38.565356   34413 buildroot.go:166] provisioning hostname "test-preload-280327"
	I1128 00:24:38.565370   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetMachineName
	I1128 00:24:38.565638   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHHostname
	I1128 00:24:38.567847   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:38.568181   34413 main.go:141] libmachine: (test-preload-280327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:53:6c", ip: ""} in network mk-test-preload-280327: {Iface:virbr1 ExpiryTime:2023-11-28 01:24:32 +0000 UTC Type:0 Mac:52:54:00:6a:53:6c Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-280327 Clientid:01:52:54:00:6a:53:6c}
	I1128 00:24:38.568226   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined IP address 192.168.39.42 and MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:38.568262   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHPort
	I1128 00:24:38.568426   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHKeyPath
	I1128 00:24:38.568561   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHKeyPath
	I1128 00:24:38.568716   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHUsername
	I1128 00:24:38.568870   34413 main.go:141] libmachine: Using SSH client type: native
	I1128 00:24:38.569249   34413 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1128 00:24:38.569264   34413 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-280327 && echo "test-preload-280327" | sudo tee /etc/hostname
	I1128 00:24:38.712940   34413 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-280327
	
	I1128 00:24:38.712977   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHHostname
	I1128 00:24:38.715634   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:38.716002   34413 main.go:141] libmachine: (test-preload-280327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:53:6c", ip: ""} in network mk-test-preload-280327: {Iface:virbr1 ExpiryTime:2023-11-28 01:24:32 +0000 UTC Type:0 Mac:52:54:00:6a:53:6c Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-280327 Clientid:01:52:54:00:6a:53:6c}
	I1128 00:24:38.716032   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined IP address 192.168.39.42 and MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:38.716162   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHPort
	I1128 00:24:38.716358   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHKeyPath
	I1128 00:24:38.716517   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHKeyPath
	I1128 00:24:38.716705   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHUsername
	I1128 00:24:38.716883   34413 main.go:141] libmachine: Using SSH client type: native
	I1128 00:24:38.717191   34413 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1128 00:24:38.717209   34413 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-280327' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-280327/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-280327' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:24:38.857522   34413 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:24:38.857555   34413 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:24:38.857580   34413 buildroot.go:174] setting up certificates
	I1128 00:24:38.857592   34413 provision.go:83] configureAuth start
	I1128 00:24:38.857609   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetMachineName
	I1128 00:24:38.857890   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetIP
	I1128 00:24:38.860741   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:38.861106   34413 main.go:141] libmachine: (test-preload-280327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:53:6c", ip: ""} in network mk-test-preload-280327: {Iface:virbr1 ExpiryTime:2023-11-28 01:24:32 +0000 UTC Type:0 Mac:52:54:00:6a:53:6c Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-280327 Clientid:01:52:54:00:6a:53:6c}
	I1128 00:24:38.861132   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined IP address 192.168.39.42 and MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:38.861267   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHHostname
	I1128 00:24:38.863655   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:38.863979   34413 main.go:141] libmachine: (test-preload-280327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:53:6c", ip: ""} in network mk-test-preload-280327: {Iface:virbr1 ExpiryTime:2023-11-28 01:24:32 +0000 UTC Type:0 Mac:52:54:00:6a:53:6c Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-280327 Clientid:01:52:54:00:6a:53:6c}
	I1128 00:24:38.864002   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined IP address 192.168.39.42 and MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:38.864175   34413 provision.go:138] copyHostCerts
	I1128 00:24:38.864223   34413 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:24:38.864233   34413 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:24:38.864296   34413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:24:38.864378   34413 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:24:38.864386   34413 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:24:38.864411   34413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:24:38.864471   34413 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:24:38.864479   34413 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:24:38.864503   34413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:24:38.864561   34413 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.test-preload-280327 san=[192.168.39.42 192.168.39.42 localhost 127.0.0.1 minikube test-preload-280327]
	I1128 00:24:39.221403   34413 provision.go:172] copyRemoteCerts
	I1128 00:24:39.221464   34413 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:24:39.221488   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHHostname
	I1128 00:24:39.224376   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:39.224807   34413 main.go:141] libmachine: (test-preload-280327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:53:6c", ip: ""} in network mk-test-preload-280327: {Iface:virbr1 ExpiryTime:2023-11-28 01:24:32 +0000 UTC Type:0 Mac:52:54:00:6a:53:6c Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-280327 Clientid:01:52:54:00:6a:53:6c}
	I1128 00:24:39.224837   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined IP address 192.168.39.42 and MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:39.225010   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHPort
	I1128 00:24:39.225261   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHKeyPath
	I1128 00:24:39.225450   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHUsername
	I1128 00:24:39.225624   34413 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/test-preload-280327/id_rsa Username:docker}
	I1128 00:24:39.322074   34413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:24:39.345028   34413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1128 00:24:39.369443   34413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 00:24:39.394597   34413 provision.go:86] duration metric: configureAuth took 536.990267ms
	I1128 00:24:39.394626   34413 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:24:39.394808   34413 config.go:182] Loaded profile config "test-preload-280327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1128 00:24:39.394890   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHHostname
	I1128 00:24:39.397526   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:39.397832   34413 main.go:141] libmachine: (test-preload-280327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:53:6c", ip: ""} in network mk-test-preload-280327: {Iface:virbr1 ExpiryTime:2023-11-28 01:24:32 +0000 UTC Type:0 Mac:52:54:00:6a:53:6c Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-280327 Clientid:01:52:54:00:6a:53:6c}
	I1128 00:24:39.397867   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined IP address 192.168.39.42 and MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:39.398024   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHPort
	I1128 00:24:39.398185   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHKeyPath
	I1128 00:24:39.398321   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHKeyPath
	I1128 00:24:39.398513   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHUsername
	I1128 00:24:39.398657   34413 main.go:141] libmachine: Using SSH client type: native
	I1128 00:24:39.398990   34413 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1128 00:24:39.399010   34413 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:24:39.712960   34413 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:24:39.712989   34413 machine.go:91] provisioned docker machine in 1.148027192s
	I1128 00:24:39.713001   34413 start.go:300] post-start starting for "test-preload-280327" (driver="kvm2")
	I1128 00:24:39.713014   34413 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:24:39.713054   34413 main.go:141] libmachine: (test-preload-280327) Calling .DriverName
	I1128 00:24:39.713351   34413 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:24:39.713377   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHHostname
	I1128 00:24:39.715833   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:39.716242   34413 main.go:141] libmachine: (test-preload-280327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:53:6c", ip: ""} in network mk-test-preload-280327: {Iface:virbr1 ExpiryTime:2023-11-28 01:24:32 +0000 UTC Type:0 Mac:52:54:00:6a:53:6c Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-280327 Clientid:01:52:54:00:6a:53:6c}
	I1128 00:24:39.716274   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined IP address 192.168.39.42 and MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:39.716432   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHPort
	I1128 00:24:39.716627   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHKeyPath
	I1128 00:24:39.716808   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHUsername
	I1128 00:24:39.716997   34413 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/test-preload-280327/id_rsa Username:docker}
	I1128 00:24:39.811572   34413 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:24:39.816276   34413 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:24:39.816301   34413 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:24:39.816370   34413 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:24:39.816461   34413 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:24:39.816568   34413 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:24:39.825253   34413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:24:39.848456   34413 start.go:303] post-start completed in 135.44082ms
	I1128 00:24:39.848479   34413 fix.go:56] fixHost completed within 19.726118245s
	I1128 00:24:39.848511   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHHostname
	I1128 00:24:39.851046   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:39.851335   34413 main.go:141] libmachine: (test-preload-280327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:53:6c", ip: ""} in network mk-test-preload-280327: {Iface:virbr1 ExpiryTime:2023-11-28 01:24:32 +0000 UTC Type:0 Mac:52:54:00:6a:53:6c Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-280327 Clientid:01:52:54:00:6a:53:6c}
	I1128 00:24:39.851365   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined IP address 192.168.39.42 and MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:39.851477   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHPort
	I1128 00:24:39.851690   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHKeyPath
	I1128 00:24:39.851860   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHKeyPath
	I1128 00:24:39.852042   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHUsername
	I1128 00:24:39.852225   34413 main.go:141] libmachine: Using SSH client type: native
	I1128 00:24:39.852601   34413 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1128 00:24:39.852615   34413 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:24:39.981720   34413 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701131079.931834783
	
	I1128 00:24:39.981743   34413 fix.go:206] guest clock: 1701131079.931834783
	I1128 00:24:39.981750   34413 fix.go:219] Guest: 2023-11-28 00:24:39.931834783 +0000 UTC Remote: 2023-11-28 00:24:39.848492934 +0000 UTC m=+36.381828619 (delta=83.341849ms)
	I1128 00:24:39.981767   34413 fix.go:190] guest clock delta is within tolerance: 83.341849ms
	I1128 00:24:39.981772   34413 start.go:83] releasing machines lock for "test-preload-280327", held for 19.859428734s
	I1128 00:24:39.981799   34413 main.go:141] libmachine: (test-preload-280327) Calling .DriverName
	I1128 00:24:39.982041   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetIP
	I1128 00:24:39.984551   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:39.984912   34413 main.go:141] libmachine: (test-preload-280327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:53:6c", ip: ""} in network mk-test-preload-280327: {Iface:virbr1 ExpiryTime:2023-11-28 01:24:32 +0000 UTC Type:0 Mac:52:54:00:6a:53:6c Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-280327 Clientid:01:52:54:00:6a:53:6c}
	I1128 00:24:39.984944   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined IP address 192.168.39.42 and MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:39.985101   34413 main.go:141] libmachine: (test-preload-280327) Calling .DriverName
	I1128 00:24:39.985519   34413 main.go:141] libmachine: (test-preload-280327) Calling .DriverName
	I1128 00:24:39.985705   34413 main.go:141] libmachine: (test-preload-280327) Calling .DriverName
	I1128 00:24:39.985809   34413 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:24:39.985849   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHHostname
	I1128 00:24:39.985908   34413 ssh_runner.go:195] Run: cat /version.json
	I1128 00:24:39.985928   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHHostname
	I1128 00:24:39.988677   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:39.989070   34413 main.go:141] libmachine: (test-preload-280327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:53:6c", ip: ""} in network mk-test-preload-280327: {Iface:virbr1 ExpiryTime:2023-11-28 01:24:32 +0000 UTC Type:0 Mac:52:54:00:6a:53:6c Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-280327 Clientid:01:52:54:00:6a:53:6c}
	I1128 00:24:39.989096   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:39.989118   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined IP address 192.168.39.42 and MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:39.989251   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHPort
	I1128 00:24:39.989471   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHKeyPath
	I1128 00:24:39.989625   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHUsername
	I1128 00:24:39.989656   34413 main.go:141] libmachine: (test-preload-280327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:53:6c", ip: ""} in network mk-test-preload-280327: {Iface:virbr1 ExpiryTime:2023-11-28 01:24:32 +0000 UTC Type:0 Mac:52:54:00:6a:53:6c Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-280327 Clientid:01:52:54:00:6a:53:6c}
	I1128 00:24:39.989685   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined IP address 192.168.39.42 and MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:39.989750   34413 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/test-preload-280327/id_rsa Username:docker}
	I1128 00:24:39.989819   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHPort
	I1128 00:24:39.989983   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHKeyPath
	I1128 00:24:39.990118   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHUsername
	I1128 00:24:39.990234   34413 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/test-preload-280327/id_rsa Username:docker}
	I1128 00:24:40.101292   34413 ssh_runner.go:195] Run: systemctl --version
	I1128 00:24:40.106953   34413 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:24:40.246365   34413 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:24:40.253359   34413 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:24:40.253434   34413 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:24:40.267465   34413 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:24:40.267487   34413 start.go:472] detecting cgroup driver to use...
	I1128 00:24:40.267541   34413 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:24:40.282298   34413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:24:40.293612   34413 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:24:40.293663   34413 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:24:40.305639   34413 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:24:40.317748   34413 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:24:40.420250   34413 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:24:40.538712   34413 docker.go:219] disabling docker service ...
	I1128 00:24:40.538801   34413 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:24:40.552516   34413 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:24:40.564337   34413 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:24:40.675844   34413 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:24:40.775984   34413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:24:40.789535   34413 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:24:40.806872   34413 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1128 00:24:40.806936   34413 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:24:40.817567   34413 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:24:40.817642   34413 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:24:40.828992   34413 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:24:40.840221   34413 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:24:40.851910   34413 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:24:40.863109   34413 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:24:40.872458   34413 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:24:40.872509   34413 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:24:40.885547   34413 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:24:40.895475   34413 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:24:40.994474   34413 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 00:24:41.167463   34413 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:24:41.167562   34413 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:24:41.176409   34413 start.go:540] Will wait 60s for crictl version
	I1128 00:24:41.176488   34413 ssh_runner.go:195] Run: which crictl
	I1128 00:24:41.180323   34413 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:24:41.219183   34413 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
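The two 60-second waits above (first for the socket path, then for a working `crictl version`) are simple poll loops. A hedged sketch of such a poll, using local exec instead of the SSH runner; the socket path, crictl path and timeout are taken from the log:

```go
// waitcrio.go - sketch: poll for the CRI-O socket, then for a working
// `crictl version`, each with a 60s budget as in the log above.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

func waitFor(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %v", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	sock := "/var/run/crio/crio.sock"
	if err := waitFor(60*time.Second, func() error {
		_, err := os.Stat(sock)
		return err
	}); err != nil {
		log.Fatal(err)
	}
	if err := waitFor(60*time.Second, func() error {
		return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
	}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("CRI-O is up")
}
```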
	I1128 00:24:41.219302   34413 ssh_runner.go:195] Run: crio --version
	I1128 00:24:41.264796   34413 ssh_runner.go:195] Run: crio --version
	I1128 00:24:41.314340   34413 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.24.1 ...
	I1128 00:24:41.315747   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetIP
	I1128 00:24:41.318108   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:41.318424   34413 main.go:141] libmachine: (test-preload-280327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:53:6c", ip: ""} in network mk-test-preload-280327: {Iface:virbr1 ExpiryTime:2023-11-28 01:24:32 +0000 UTC Type:0 Mac:52:54:00:6a:53:6c Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-280327 Clientid:01:52:54:00:6a:53:6c}
	I1128 00:24:41.318531   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined IP address 192.168.39.42 and MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:24:41.318633   34413 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1128 00:24:41.322835   34413 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:24:41.335705   34413 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1128 00:24:41.335775   34413 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:24:41.374086   34413 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1128 00:24:41.374157   34413 ssh_runner.go:195] Run: which lz4
	I1128 00:24:41.377844   34413 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 00:24:41.381643   34413 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 00:24:41.381673   34413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1128 00:24:43.163610   34413 crio.go:444] Took 1.785799 seconds to copy over tarball
	I1128 00:24:43.163696   34413 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 00:24:46.101471   34413 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.937740354s)
	I1128 00:24:46.101511   34413 crio.go:451] Took 2.937872 seconds to extract the tarball
	I1128 00:24:46.101524   34413 ssh_runner.go:146] rm: /preloaded.tar.lz4
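The preload step above copies the ~459 MB tarball to the guest, extracts it with lz4-compressed tar into /var, and removes it. A sketch of the extract-and-clean-up sequence, shelling out the same way the log does (local paths, no SSH, must run with root privileges):

```go
// preload.go - sketch: extract a preloaded image tarball with lz4-compressed
// tar and remove it afterwards, mirroring the commands in the log above.
package main

import (
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	start := time.Now()
	// Equivalent to: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("extracting %s: %v", tarball, err)
	}
	log.Printf("took %s to extract the tarball", time.Since(start))

	if err := os.Remove(tarball); err != nil {
		log.Printf("cleanup: %v", err)
	}
}
```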
	I1128 00:24:46.141816   34413 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:24:46.195557   34413 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1128 00:24:46.195581   34413 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1128 00:24:46.195672   34413 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:24:46.195717   34413 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1128 00:24:46.195740   34413 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I1128 00:24:46.195684   34413 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1128 00:24:46.195855   34413 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1128 00:24:46.195894   34413 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1128 00:24:46.195684   34413 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1128 00:24:46.195846   34413 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1128 00:24:46.197025   34413 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1128 00:24:46.197039   34413 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:24:46.197041   34413 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1128 00:24:46.197054   34413 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1128 00:24:46.197056   34413 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1128 00:24:46.197024   34413 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1128 00:24:46.197025   34413 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1128 00:24:46.197035   34413 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1128 00:24:46.322361   34413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1128 00:24:46.338594   34413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1128 00:24:46.343949   34413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1128 00:24:46.358742   34413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1128 00:24:46.362017   34413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1128 00:24:46.367960   34413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1128 00:24:46.385189   34413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1128 00:24:46.390894   34413 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1128 00:24:46.390936   34413 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1128 00:24:46.390984   34413 ssh_runner.go:195] Run: which crictl
	I1128 00:24:46.443749   34413 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1128 00:24:46.443797   34413 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1128 00:24:46.443850   34413 ssh_runner.go:195] Run: which crictl
	I1128 00:24:46.478121   34413 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1128 00:24:46.478173   34413 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1128 00:24:46.478221   34413 ssh_runner.go:195] Run: which crictl
	I1128 00:24:46.482650   34413 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1128 00:24:46.482693   34413 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1128 00:24:46.482744   34413 ssh_runner.go:195] Run: which crictl
	I1128 00:24:46.509305   34413 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1128 00:24:46.509355   34413 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1128 00:24:46.509400   34413 ssh_runner.go:195] Run: which crictl
	I1128 00:24:46.520739   34413 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1128 00:24:46.520792   34413 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1128 00:24:46.520833   34413 ssh_runner.go:195] Run: which crictl
	I1128 00:24:46.523525   34413 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1128 00:24:46.523565   34413 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1128 00:24:46.523597   34413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1128 00:24:46.523644   34413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1128 00:24:46.523605   34413 ssh_runner.go:195] Run: which crictl
	I1128 00:24:46.523688   34413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1128 00:24:46.523768   34413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1128 00:24:46.523798   34413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1128 00:24:46.527413   34413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1128 00:24:46.661555   34413 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1128 00:24:46.661599   34413 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1128 00:24:46.661669   34413 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1128 00:24:46.661720   34413 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1128 00:24:46.661737   34413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1128 00:24:46.661669   34413 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1128 00:24:46.661814   34413 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1128 00:24:46.665245   34413 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1128 00:24:46.665309   34413 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1128 00:24:46.665349   34413 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I1128 00:24:46.665377   34413 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1128 00:24:46.665401   34413 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I1128 00:24:46.665456   34413 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1128 00:24:46.712037   34413 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1128 00:24:46.712070   34413 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1128 00:24:46.712111   34413 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1128 00:24:46.712143   34413 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1128 00:24:46.712189   34413 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1128 00:24:46.712235   34413 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1128 00:24:46.712245   34413 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I1128 00:24:46.712286   34413 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1128 00:24:46.712394   34413 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1128 00:24:46.712416   34413 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1128 00:24:47.128427   34413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:24:48.964509   34413 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.252366886s)
	I1128 00:24:48.964555   34413 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1128 00:24:48.964576   34413 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.836115994s)
	I1128 00:24:48.964580   34413 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1128 00:24:48.964524   34413 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0: (2.252260703s)
	I1128 00:24:48.964679   34413 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1128 00:24:48.964682   34413 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1128 00:24:49.407229   34413 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1128 00:24:49.407279   34413 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1128 00:24:49.407334   34413 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1128 00:24:50.149609   34413 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1128 00:24:50.149644   34413 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.7
	I1128 00:24:50.149689   34413 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1128 00:24:50.294278   34413 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1128 00:24:50.294322   34413 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1128 00:24:50.294388   34413 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1128 00:24:51.142759   34413 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1128 00:24:51.142812   34413 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1128 00:24:51.142863   34413 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1128 00:24:51.583103   34413 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1128 00:24:51.583155   34413 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1128 00:24:51.583208   34413 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1128 00:24:53.738151   34413 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.154919525s)
	I1128 00:24:53.738179   34413 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1128 00:24:53.738211   34413 cache_images.go:123] Successfully loaded all cached images
	I1128 00:24:53.738219   34413 cache_images.go:92] LoadImages completed in 7.542628423s
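The LoadImages sequence above reduces to: for each required image, ask the runtime whether it is already present (`podman image inspect`), and if not, transfer the cached tarball and `podman load -i` it. A condensed sketch of that loop, assuming the tarballs are already on the node at the /var/lib/minikube/images paths shown in the log (SSH replaced by local exec; only a few images listed for brevity):

```go
// loadimages.go - sketch: load cached images into the CRI-O/podman store when
// the runtime does not already have them, mirroring the LoadImages flow above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// haveImage asks podman whether the image is already present.
func haveImage(ref string) bool {
	return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Run() == nil
}

func main() {
	// Image ref -> tarball already transferred to the node (paths as in the log).
	images := map[string]string{
		"registry.k8s.io/kube-apiserver:v1.24.4": "/var/lib/minikube/images/kube-apiserver_v1.24.4",
		"registry.k8s.io/etcd:3.5.3-0":           "/var/lib/minikube/images/etcd_3.5.3-0",
		"registry.k8s.io/pause:3.7":              "/var/lib/minikube/images/pause_3.7",
	}
	for ref, tarball := range images {
		if haveImage(ref) {
			fmt.Printf("skipping %s (already loaded)\n", ref)
			continue
		}
		// Equivalent to: sudo podman load -i <tarball>
		if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
			log.Fatalf("loading %s: %v", ref, err)
		}
		fmt.Printf("loaded %s from cache\n", ref)
	}
}
```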
	I1128 00:24:53.738281   34413 ssh_runner.go:195] Run: crio config
	I1128 00:24:53.795487   34413 cni.go:84] Creating CNI manager for ""
	I1128 00:24:53.795511   34413 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:24:53.795534   34413 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:24:53.795553   34413 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.42 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-280327 NodeName:test-preload-280327 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.42"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.42 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:24:53.795688   34413 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.42
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-280327"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.42
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.42"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:24:53.795747   34413 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-280327 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.42
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-280327 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 00:24:53.795798   34413 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1128 00:24:53.805474   34413 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:24:53.805561   34413 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:24:53.814551   34413 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1128 00:24:53.830500   34413 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:24:53.846361   34413 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1128 00:24:53.862842   34413 ssh_runner.go:195] Run: grep 192.168.39.42	control-plane.minikube.internal$ /etc/hosts
	I1128 00:24:53.866589   34413 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.42	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:24:53.877915   34413 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/test-preload-280327 for IP: 192.168.39.42
	I1128 00:24:53.877946   34413 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:24:53.878099   34413 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:24:53.878170   34413 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:24:53.878271   34413 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/test-preload-280327/client.key
	I1128 00:24:53.878350   34413 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/test-preload-280327/apiserver.key.95c56caa
	I1128 00:24:53.878413   34413 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/test-preload-280327/proxy-client.key
	I1128 00:24:53.878552   34413 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:24:53.878599   34413 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:24:53.878614   34413 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:24:53.878650   34413 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:24:53.878693   34413 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:24:53.878724   34413 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:24:53.878777   34413 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:24:53.879635   34413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/test-preload-280327/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:24:53.900986   34413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/test-preload-280327/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1128 00:24:53.923515   34413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/test-preload-280327/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:24:53.945455   34413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/test-preload-280327/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1128 00:24:53.967458   34413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:24:53.990717   34413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:24:54.015526   34413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:24:54.038263   34413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:24:54.060103   34413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:24:54.083024   34413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:24:54.105057   34413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:24:54.126409   34413 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:24:54.142450   34413 ssh_runner.go:195] Run: openssl version
	I1128 00:24:54.147791   34413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:24:54.158067   34413 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:24:54.162472   34413 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:24:54.162518   34413 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:24:54.167748   34413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:24:54.177588   34413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:24:54.187382   34413 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:24:54.191826   34413 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:24:54.191896   34413 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:24:54.197170   34413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:24:54.207205   34413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:24:54.217351   34413 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:24:54.221916   34413 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:24:54.221978   34413 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:24:54.227575   34413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:24:54.238193   34413 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:24:54.242665   34413 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:24:54.248397   34413 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:24:54.253994   34413 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:24:54.259543   34413 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:24:54.265379   34413 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:24:54.270786   34413 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
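The chain of `openssl x509 -checkend 86400` calls above verifies that each control-plane certificate remains valid for at least another day. The same check can be expressed in pure Go with crypto/x509; a sketch below, using one of the certificate paths from the log (the "would trigger regeneration" wording is illustrative, not minikube's message):

```go
// certcheck.go - sketch: report whether a PEM certificate expires within the
// next 24 hours, like `openssl x509 -checkend 86400` in the log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	const path = "/var/lib/minikube/certs/apiserver-etcd-client.crt"
	expiring, err := expiresWithin(path, 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if expiring {
		fmt.Printf("%s expires within 24h - would trigger regeneration\n", path)
	} else {
		fmt.Printf("%s is valid for at least another 24h\n", path)
	}
}
```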
	I1128 00:24:54.276078   34413 kubeadm.go:404] StartCluster: {Name:test-preload-280327 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-280327 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:24:54.276182   34413 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:24:54.276244   34413 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:24:54.313681   34413 cri.go:89] found id: ""
	I1128 00:24:54.313739   34413 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:24:54.323735   34413 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:24:54.323783   34413 kubeadm.go:636] restartCluster start
	I1128 00:24:54.323825   34413 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:24:54.332730   34413 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:24:54.333126   34413 kubeconfig.go:135] verify returned: extract IP: "test-preload-280327" does not appear in /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:24:54.333218   34413 kubeconfig.go:146] "test-preload-280327" context is missing from /home/jenkins/minikube-integration/17206-4749/kubeconfig - will repair!
	I1128 00:24:54.333479   34413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:24:54.334039   34413 kapi.go:59] client config for test-preload-280327: &rest.Config{Host:"https://192.168.39.42:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/test-preload-280327/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/test-preload-280327/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 00:24:54.334690   34413 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:24:54.342922   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:24:54.342968   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:24:54.353785   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:24:54.353802   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:24:54.353829   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:24:54.364150   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:24:54.864783   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:24:54.864855   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:24:54.876834   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:24:55.364731   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:24:55.364817   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:24:55.376785   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:24:55.864309   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:24:55.864409   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:24:55.875842   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:24:56.365095   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:24:56.365182   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:24:56.378108   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:24:56.864579   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:24:56.864668   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:24:56.876189   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:24:57.364817   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:24:57.364917   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:24:57.377157   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:24:57.864695   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:24:57.864814   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:24:57.877757   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:24:58.364220   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:24:58.364344   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:24:58.376223   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:24:58.865133   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:24:58.865261   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:24:58.876911   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:24:59.364380   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:24:59.364450   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:24:59.376307   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:24:59.865055   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:24:59.865130   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:24:59.877015   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:25:00.365181   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:25:00.365274   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:25:00.377250   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:25:00.864687   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:25:00.864815   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:25:00.876555   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:25:01.365152   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:25:01.365250   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:25:01.376926   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:25:01.864457   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:25:01.864540   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:25:01.877408   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:25:02.365018   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:25:02.365112   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:25:02.376964   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:25:02.864477   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:25:02.864601   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:25:02.875374   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:25:03.364977   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:25:03.365076   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:25:03.376704   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:25:03.864810   34413 api_server.go:166] Checking apiserver status ...
	I1128 00:25:03.864891   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:25:03.876914   34413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:25:04.343918   34413 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:25:04.343949   34413 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:25:04.343972   34413 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:25:04.344030   34413 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:25:04.390425   34413 cri.go:89] found id: ""
	I1128 00:25:04.390490   34413 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:25:04.406709   34413 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:25:04.415950   34413 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:25:04.416010   34413 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:25:04.425537   34413 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:25:04.425572   34413 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:25:04.539416   34413 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:25:05.117156   34413 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:25:05.485802   34413 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:25:05.548407   34413 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:25:05.620449   34413 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:25:05.620514   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:25:05.633945   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:25:06.149312   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:25:06.649488   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:25:07.149701   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:25:07.648907   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:25:08.149114   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:25:08.174650   34413 api_server.go:72] duration metric: took 2.554202134s to wait for apiserver process to appear ...
	I1128 00:25:08.174682   34413 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:25:08.174699   34413 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1128 00:25:13.050750   34413 api_server.go:279] https://192.168.39.42:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:25:13.050782   34413 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:25:13.050798   34413 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1128 00:25:13.103832   34413 api_server.go:279] https://192.168.39.42:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:25:13.103858   34413 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:25:13.604745   34413 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1128 00:25:13.610774   34413 api_server.go:279] https://192.168.39.42:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1128 00:25:13.610811   34413 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1128 00:25:14.104343   34413 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1128 00:25:14.110933   34413 api_server.go:279] https://192.168.39.42:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1128 00:25:14.110977   34413 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1128 00:25:14.604795   34413 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1128 00:25:14.611950   34413 api_server.go:279] https://192.168.39.42:8443/healthz returned 200:
	ok
	I1128 00:25:14.620904   34413 api_server.go:141] control plane version: v1.24.4
	I1128 00:25:14.620937   34413 api_server.go:131] duration metric: took 6.446244024s to wait for apiserver health ...
	I1128 00:25:14.620949   34413 cni.go:84] Creating CNI manager for ""
	I1128 00:25:14.620958   34413 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:25:14.623005   34413 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:25:14.624918   34413 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:25:14.634572   34413 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:25:14.670533   34413 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:25:14.685785   34413 system_pods.go:59] 8 kube-system pods found
	I1128 00:25:14.685818   34413 system_pods.go:61] "coredns-6d4b75cb6d-m56l2" [6acc0e80-c6dc-4798-9777-0ebb1ae1e84f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:25:14.685827   34413 system_pods.go:61] "coredns-6d4b75cb6d-w6fgj" [84d548a8-d578-4d10-8273-fe94be98c5f8] Running
	I1128 00:25:14.685837   34413 system_pods.go:61] "etcd-test-preload-280327" [956f1118-b25b-451e-a1df-11f0c025ac45] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 00:25:14.685848   34413 system_pods.go:61] "kube-apiserver-test-preload-280327" [53d50b0c-cf59-4c5a-bd71-d46a936aeaf2] Running
	I1128 00:25:14.685857   34413 system_pods.go:61] "kube-controller-manager-test-preload-280327" [53d8a392-e0a5-41de-8ad3-703b963d7a89] Running
	I1128 00:25:14.685865   34413 system_pods.go:61] "kube-proxy-7ld42" [3e2ac43e-c359-41c3-9bc6-8acdb338ae3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 00:25:14.685875   34413 system_pods.go:61] "kube-scheduler-test-preload-280327" [4b0d8e12-db1e-4f49-9464-6cfc34465732] Running
	I1128 00:25:14.685884   34413 system_pods.go:61] "storage-provisioner" [3dc47545-ed6a-4502-b893-23977ad84222] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 00:25:14.685894   34413 system_pods.go:74] duration metric: took 15.335079ms to wait for pod list to return data ...
	I1128 00:25:14.685901   34413 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:25:14.692208   34413 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:25:14.692247   34413 node_conditions.go:123] node cpu capacity is 2
	I1128 00:25:14.692261   34413 node_conditions.go:105] duration metric: took 6.355268ms to run NodePressure ...
	I1128 00:25:14.692283   34413 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:25:14.934272   34413 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:25:14.941722   34413 kubeadm.go:787] kubelet initialised
	I1128 00:25:14.941745   34413 kubeadm.go:788] duration metric: took 7.445864ms waiting for restarted kubelet to initialise ...
	I1128 00:25:14.941751   34413 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:25:14.949624   34413 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-m56l2" in "kube-system" namespace to be "Ready" ...
	I1128 00:25:14.955986   34413 pod_ready.go:97] node "test-preload-280327" hosting pod "coredns-6d4b75cb6d-m56l2" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-280327" has status "Ready":"False"
	I1128 00:25:14.956011   34413 pod_ready.go:81] duration metric: took 6.365046ms waiting for pod "coredns-6d4b75cb6d-m56l2" in "kube-system" namespace to be "Ready" ...
	E1128 00:25:14.956019   34413 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-280327" hosting pod "coredns-6d4b75cb6d-m56l2" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-280327" has status "Ready":"False"
	I1128 00:25:14.956026   34413 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-w6fgj" in "kube-system" namespace to be "Ready" ...
	I1128 00:25:14.962283   34413 pod_ready.go:97] node "test-preload-280327" hosting pod "coredns-6d4b75cb6d-w6fgj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-280327" has status "Ready":"False"
	I1128 00:25:14.962301   34413 pod_ready.go:81] duration metric: took 6.264076ms waiting for pod "coredns-6d4b75cb6d-w6fgj" in "kube-system" namespace to be "Ready" ...
	E1128 00:25:14.962309   34413 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-280327" hosting pod "coredns-6d4b75cb6d-w6fgj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-280327" has status "Ready":"False"
	I1128 00:25:14.962313   34413 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-280327" in "kube-system" namespace to be "Ready" ...
	I1128 00:25:14.968192   34413 pod_ready.go:97] node "test-preload-280327" hosting pod "etcd-test-preload-280327" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-280327" has status "Ready":"False"
	I1128 00:25:14.968219   34413 pod_ready.go:81] duration metric: took 5.897363ms waiting for pod "etcd-test-preload-280327" in "kube-system" namespace to be "Ready" ...
	E1128 00:25:14.968231   34413 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-280327" hosting pod "etcd-test-preload-280327" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-280327" has status "Ready":"False"
	I1128 00:25:14.968238   34413 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-280327" in "kube-system" namespace to be "Ready" ...
	I1128 00:25:15.075007   34413 pod_ready.go:97] node "test-preload-280327" hosting pod "kube-apiserver-test-preload-280327" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-280327" has status "Ready":"False"
	I1128 00:25:15.075048   34413 pod_ready.go:81] duration metric: took 106.788605ms waiting for pod "kube-apiserver-test-preload-280327" in "kube-system" namespace to be "Ready" ...
	E1128 00:25:15.075060   34413 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-280327" hosting pod "kube-apiserver-test-preload-280327" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-280327" has status "Ready":"False"
	I1128 00:25:15.075070   34413 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-280327" in "kube-system" namespace to be "Ready" ...
	I1128 00:25:15.474843   34413 pod_ready.go:97] node "test-preload-280327" hosting pod "kube-controller-manager-test-preload-280327" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-280327" has status "Ready":"False"
	I1128 00:25:15.474877   34413 pod_ready.go:81] duration metric: took 399.794274ms waiting for pod "kube-controller-manager-test-preload-280327" in "kube-system" namespace to be "Ready" ...
	E1128 00:25:15.474891   34413 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-280327" hosting pod "kube-controller-manager-test-preload-280327" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-280327" has status "Ready":"False"
	I1128 00:25:15.474899   34413 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7ld42" in "kube-system" namespace to be "Ready" ...
	I1128 00:25:15.873671   34413 pod_ready.go:97] node "test-preload-280327" hosting pod "kube-proxy-7ld42" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-280327" has status "Ready":"False"
	I1128 00:25:15.873703   34413 pod_ready.go:81] duration metric: took 398.792835ms waiting for pod "kube-proxy-7ld42" in "kube-system" namespace to be "Ready" ...
	E1128 00:25:15.873712   34413 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-280327" hosting pod "kube-proxy-7ld42" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-280327" has status "Ready":"False"
	I1128 00:25:15.873720   34413 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-280327" in "kube-system" namespace to be "Ready" ...
	I1128 00:25:16.275259   34413 pod_ready.go:97] node "test-preload-280327" hosting pod "kube-scheduler-test-preload-280327" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-280327" has status "Ready":"False"
	I1128 00:25:16.275283   34413 pod_ready.go:81] duration metric: took 401.555452ms waiting for pod "kube-scheduler-test-preload-280327" in "kube-system" namespace to be "Ready" ...
	E1128 00:25:16.275293   34413 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-280327" hosting pod "kube-scheduler-test-preload-280327" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-280327" has status "Ready":"False"
	I1128 00:25:16.275299   34413 pod_ready.go:38] duration metric: took 1.333537822s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:25:16.275321   34413 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:25:16.287453   34413 ops.go:34] apiserver oom_adj: -16
	I1128 00:25:16.287487   34413 kubeadm.go:640] restartCluster took 21.963688451s
	I1128 00:25:16.287496   34413 kubeadm.go:406] StartCluster complete in 22.011423485s
	I1128 00:25:16.287516   34413 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:25:16.287606   34413 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:25:16.288278   34413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:25:16.288562   34413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:25:16.288654   34413 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:25:16.288735   34413 addons.go:69] Setting storage-provisioner=true in profile "test-preload-280327"
	I1128 00:25:16.288749   34413 addons.go:69] Setting default-storageclass=true in profile "test-preload-280327"
	I1128 00:25:16.288765   34413 addons.go:231] Setting addon storage-provisioner=true in "test-preload-280327"
	I1128 00:25:16.288765   34413 config.go:182] Loaded profile config "test-preload-280327": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	W1128 00:25:16.288777   34413 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:25:16.288778   34413 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-280327"
	I1128 00:25:16.288829   34413 host.go:66] Checking if "test-preload-280327" exists ...
	I1128 00:25:16.289150   34413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 00:25:16.289133   34413 kapi.go:59] client config for test-preload-280327: &rest.Config{Host:"https://192.168.39.42:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/test-preload-280327/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/test-preload-280327/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 00:25:16.289197   34413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:25:16.289301   34413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 00:25:16.289344   34413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:25:16.292088   34413 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-280327" context rescaled to 1 replicas
	I1128 00:25:16.292132   34413 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:25:16.295286   34413 out.go:177] * Verifying Kubernetes components...
	I1128 00:25:16.296679   34413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:25:16.304368   34413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39417
	I1128 00:25:16.304695   34413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33343
	I1128 00:25:16.304773   34413 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:25:16.305040   34413 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:25:16.305243   34413 main.go:141] libmachine: Using API Version  1
	I1128 00:25:16.305262   34413 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:25:16.305533   34413 main.go:141] libmachine: Using API Version  1
	I1128 00:25:16.305565   34413 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:25:16.305663   34413 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:25:16.305847   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetState
	I1128 00:25:16.305870   34413 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:25:16.306281   34413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 00:25:16.306322   34413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:25:16.308353   34413 kapi.go:59] client config for test-preload-280327: &rest.Config{Host:"https://192.168.39.42:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/test-preload-280327/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/profiles/test-preload-280327/client.key", CAFile:"/home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 00:25:16.308693   34413 addons.go:231] Setting addon default-storageclass=true in "test-preload-280327"
	W1128 00:25:16.308716   34413 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:25:16.308749   34413 host.go:66] Checking if "test-preload-280327" exists ...
	I1128 00:25:16.309189   34413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 00:25:16.309234   34413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:25:16.320761   34413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44601
	I1128 00:25:16.321213   34413 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:25:16.321760   34413 main.go:141] libmachine: Using API Version  1
	I1128 00:25:16.321786   34413 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:25:16.322147   34413 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:25:16.322348   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetState
	I1128 00:25:16.322974   34413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32987
	I1128 00:25:16.323393   34413 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:25:16.323920   34413 main.go:141] libmachine: Using API Version  1
	I1128 00:25:16.323946   34413 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:25:16.324218   34413 main.go:141] libmachine: (test-preload-280327) Calling .DriverName
	I1128 00:25:16.324286   34413 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:25:16.326333   34413 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:25:16.324724   34413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 00:25:16.326387   34413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:25:16.327969   34413 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:25:16.327983   34413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:25:16.328005   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHHostname
	I1128 00:25:16.331202   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:25:16.331641   34413 main.go:141] libmachine: (test-preload-280327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:53:6c", ip: ""} in network mk-test-preload-280327: {Iface:virbr1 ExpiryTime:2023-11-28 01:24:32 +0000 UTC Type:0 Mac:52:54:00:6a:53:6c Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-280327 Clientid:01:52:54:00:6a:53:6c}
	I1128 00:25:16.331668   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined IP address 192.168.39.42 and MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:25:16.331864   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHPort
	I1128 00:25:16.332059   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHKeyPath
	I1128 00:25:16.332238   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHUsername
	I1128 00:25:16.332389   34413 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/test-preload-280327/id_rsa Username:docker}
	I1128 00:25:16.343850   34413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42413
	I1128 00:25:16.344263   34413 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:25:16.344727   34413 main.go:141] libmachine: Using API Version  1
	I1128 00:25:16.344747   34413 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:25:16.345095   34413 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:25:16.345247   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetState
	I1128 00:25:16.347018   34413 main.go:141] libmachine: (test-preload-280327) Calling .DriverName
	I1128 00:25:16.347265   34413 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:25:16.347286   34413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:25:16.347302   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHHostname
	I1128 00:25:16.349856   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:25:16.350246   34413 main.go:141] libmachine: (test-preload-280327) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:53:6c", ip: ""} in network mk-test-preload-280327: {Iface:virbr1 ExpiryTime:2023-11-28 01:24:32 +0000 UTC Type:0 Mac:52:54:00:6a:53:6c Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-280327 Clientid:01:52:54:00:6a:53:6c}
	I1128 00:25:16.350282   34413 main.go:141] libmachine: (test-preload-280327) DBG | domain test-preload-280327 has defined IP address 192.168.39.42 and MAC address 52:54:00:6a:53:6c in network mk-test-preload-280327
	I1128 00:25:16.350412   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHPort
	I1128 00:25:16.350589   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHKeyPath
	I1128 00:25:16.350731   34413 main.go:141] libmachine: (test-preload-280327) Calling .GetSSHUsername
	I1128 00:25:16.350867   34413 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/test-preload-280327/id_rsa Username:docker}
	I1128 00:25:16.458243   34413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:25:16.503665   34413 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1128 00:25:16.503715   34413 node_ready.go:35] waiting up to 6m0s for node "test-preload-280327" to be "Ready" ...
	I1128 00:25:16.507830   34413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:25:17.392664   34413 main.go:141] libmachine: Making call to close driver server
	I1128 00:25:17.392688   34413 main.go:141] libmachine: (test-preload-280327) Calling .Close
	I1128 00:25:17.392737   34413 main.go:141] libmachine: Making call to close driver server
	I1128 00:25:17.392769   34413 main.go:141] libmachine: (test-preload-280327) Calling .Close
	I1128 00:25:17.392980   34413 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:25:17.393056   34413 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:25:17.393072   34413 main.go:141] libmachine: Making call to close driver server
	I1128 00:25:17.393082   34413 main.go:141] libmachine: (test-preload-280327) Calling .Close
	I1128 00:25:17.392978   34413 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:25:17.393130   34413 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:25:17.393149   34413 main.go:141] libmachine: Making call to close driver server
	I1128 00:25:17.393164   34413 main.go:141] libmachine: (test-preload-280327) Calling .Close
	I1128 00:25:17.393021   34413 main.go:141] libmachine: (test-preload-280327) DBG | Closing plugin on server side
	I1128 00:25:17.393039   34413 main.go:141] libmachine: (test-preload-280327) DBG | Closing plugin on server side
	I1128 00:25:17.393283   34413 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:25:17.393301   34413 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:25:17.393319   34413 main.go:141] libmachine: (test-preload-280327) DBG | Closing plugin on server side
	I1128 00:25:17.393380   34413 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:25:17.393402   34413 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:25:17.393411   34413 main.go:141] libmachine: (test-preload-280327) DBG | Closing plugin on server side
	I1128 00:25:17.402317   34413 main.go:141] libmachine: Making call to close driver server
	I1128 00:25:17.402336   34413 main.go:141] libmachine: (test-preload-280327) Calling .Close
	I1128 00:25:17.402557   34413 main.go:141] libmachine: (test-preload-280327) DBG | Closing plugin on server side
	I1128 00:25:17.402590   34413 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:25:17.402608   34413 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:25:17.405629   34413 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1128 00:25:17.406980   34413 addons.go:502] enable addons completed in 1.11833594s: enabled=[storage-provisioner default-storageclass]
	I1128 00:25:18.679493   34413 node_ready.go:58] node "test-preload-280327" has status "Ready":"False"
	I1128 00:25:20.679794   34413 node_ready.go:58] node "test-preload-280327" has status "Ready":"False"
	I1128 00:25:22.680368   34413 node_ready.go:58] node "test-preload-280327" has status "Ready":"False"
	I1128 00:25:23.679807   34413 node_ready.go:49] node "test-preload-280327" has status "Ready":"True"
	I1128 00:25:23.679837   34413 node_ready.go:38] duration metric: took 7.176093258s waiting for node "test-preload-280327" to be "Ready" ...
	I1128 00:25:23.679853   34413 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:25:23.685709   34413 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-w6fgj" in "kube-system" namespace to be "Ready" ...
	I1128 00:25:23.690955   34413 pod_ready.go:92] pod "coredns-6d4b75cb6d-w6fgj" in "kube-system" namespace has status "Ready":"True"
	I1128 00:25:23.690983   34413 pod_ready.go:81] duration metric: took 5.248488ms waiting for pod "coredns-6d4b75cb6d-w6fgj" in "kube-system" namespace to be "Ready" ...
	I1128 00:25:23.690991   34413 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-280327" in "kube-system" namespace to be "Ready" ...
	I1128 00:25:25.731224   34413 pod_ready.go:102] pod "etcd-test-preload-280327" in "kube-system" namespace has status "Ready":"False"
	I1128 00:25:27.205583   34413 pod_ready.go:92] pod "etcd-test-preload-280327" in "kube-system" namespace has status "Ready":"True"
	I1128 00:25:27.205625   34413 pod_ready.go:81] duration metric: took 3.514625856s waiting for pod "etcd-test-preload-280327" in "kube-system" namespace to be "Ready" ...
	I1128 00:25:27.205644   34413 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-280327" in "kube-system" namespace to be "Ready" ...
	I1128 00:25:27.210320   34413 pod_ready.go:92] pod "kube-apiserver-test-preload-280327" in "kube-system" namespace has status "Ready":"True"
	I1128 00:25:27.210337   34413 pod_ready.go:81] duration metric: took 4.683986ms waiting for pod "kube-apiserver-test-preload-280327" in "kube-system" namespace to be "Ready" ...
	I1128 00:25:27.210345   34413 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-280327" in "kube-system" namespace to be "Ready" ...
	I1128 00:25:27.214721   34413 pod_ready.go:92] pod "kube-controller-manager-test-preload-280327" in "kube-system" namespace has status "Ready":"True"
	I1128 00:25:27.214741   34413 pod_ready.go:81] duration metric: took 4.389279ms waiting for pod "kube-controller-manager-test-preload-280327" in "kube-system" namespace to be "Ready" ...
	I1128 00:25:27.214757   34413 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7ld42" in "kube-system" namespace to be "Ready" ...
	I1128 00:25:27.280028   34413 pod_ready.go:92] pod "kube-proxy-7ld42" in "kube-system" namespace has status "Ready":"True"
	I1128 00:25:27.280050   34413 pod_ready.go:81] duration metric: took 65.285292ms waiting for pod "kube-proxy-7ld42" in "kube-system" namespace to be "Ready" ...
	I1128 00:25:27.280058   34413 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-280327" in "kube-system" namespace to be "Ready" ...
	I1128 00:25:27.680772   34413 pod_ready.go:92] pod "kube-scheduler-test-preload-280327" in "kube-system" namespace has status "Ready":"True"
	I1128 00:25:27.680910   34413 pod_ready.go:81] duration metric: took 400.836983ms waiting for pod "kube-scheduler-test-preload-280327" in "kube-system" namespace to be "Ready" ...
	I1128 00:25:27.680938   34413 pod_ready.go:38] duration metric: took 4.001074417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:25:27.680959   34413 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:25:27.681040   34413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:25:27.698265   34413 api_server.go:72] duration metric: took 11.406093088s to wait for apiserver process to appear ...
	I1128 00:25:27.698292   34413 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:25:27.698306   34413 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1128 00:25:27.704272   34413 api_server.go:279] https://192.168.39.42:8443/healthz returned 200:
	ok
	I1128 00:25:27.705209   34413 api_server.go:141] control plane version: v1.24.4
	I1128 00:25:27.705229   34413 api_server.go:131] duration metric: took 6.932181ms to wait for apiserver health ...
	I1128 00:25:27.705236   34413 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:25:27.883458   34413 system_pods.go:59] 7 kube-system pods found
	I1128 00:25:27.883485   34413 system_pods.go:61] "coredns-6d4b75cb6d-w6fgj" [84d548a8-d578-4d10-8273-fe94be98c5f8] Running
	I1128 00:25:27.883492   34413 system_pods.go:61] "etcd-test-preload-280327" [956f1118-b25b-451e-a1df-11f0c025ac45] Running
	I1128 00:25:27.883497   34413 system_pods.go:61] "kube-apiserver-test-preload-280327" [53d50b0c-cf59-4c5a-bd71-d46a936aeaf2] Running
	I1128 00:25:27.883501   34413 system_pods.go:61] "kube-controller-manager-test-preload-280327" [53d8a392-e0a5-41de-8ad3-703b963d7a89] Running
	I1128 00:25:27.883505   34413 system_pods.go:61] "kube-proxy-7ld42" [3e2ac43e-c359-41c3-9bc6-8acdb338ae3b] Running
	I1128 00:25:27.883508   34413 system_pods.go:61] "kube-scheduler-test-preload-280327" [4b0d8e12-db1e-4f49-9464-6cfc34465732] Running
	I1128 00:25:27.883513   34413 system_pods.go:61] "storage-provisioner" [3dc47545-ed6a-4502-b893-23977ad84222] Running
	I1128 00:25:27.883518   34413 system_pods.go:74] duration metric: took 178.277263ms to wait for pod list to return data ...
	I1128 00:25:27.883524   34413 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:25:28.079763   34413 default_sa.go:45] found service account: "default"
	I1128 00:25:28.079790   34413 default_sa.go:55] duration metric: took 196.259926ms for default service account to be created ...
	I1128 00:25:28.079797   34413 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:25:28.282779   34413 system_pods.go:86] 7 kube-system pods found
	I1128 00:25:28.282803   34413 system_pods.go:89] "coredns-6d4b75cb6d-w6fgj" [84d548a8-d578-4d10-8273-fe94be98c5f8] Running
	I1128 00:25:28.282810   34413 system_pods.go:89] "etcd-test-preload-280327" [956f1118-b25b-451e-a1df-11f0c025ac45] Running
	I1128 00:25:28.282816   34413 system_pods.go:89] "kube-apiserver-test-preload-280327" [53d50b0c-cf59-4c5a-bd71-d46a936aeaf2] Running
	I1128 00:25:28.282822   34413 system_pods.go:89] "kube-controller-manager-test-preload-280327" [53d8a392-e0a5-41de-8ad3-703b963d7a89] Running
	I1128 00:25:28.282828   34413 system_pods.go:89] "kube-proxy-7ld42" [3e2ac43e-c359-41c3-9bc6-8acdb338ae3b] Running
	I1128 00:25:28.282834   34413 system_pods.go:89] "kube-scheduler-test-preload-280327" [4b0d8e12-db1e-4f49-9464-6cfc34465732] Running
	I1128 00:25:28.282839   34413 system_pods.go:89] "storage-provisioner" [3dc47545-ed6a-4502-b893-23977ad84222] Running
	I1128 00:25:28.282848   34413 system_pods.go:126] duration metric: took 203.045546ms to wait for k8s-apps to be running ...
	I1128 00:25:28.282860   34413 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:25:28.282905   34413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:25:28.297649   34413 system_svc.go:56] duration metric: took 14.777348ms WaitForService to wait for kubelet.
	I1128 00:25:28.297686   34413 kubeadm.go:581] duration metric: took 12.00551903s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:25:28.297714   34413 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:25:28.480551   34413 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:25:28.480580   34413 node_conditions.go:123] node cpu capacity is 2
	I1128 00:25:28.480592   34413 node_conditions.go:105] duration metric: took 182.873143ms to run NodePressure ...
	I1128 00:25:28.480603   34413 start.go:228] waiting for startup goroutines ...
	I1128 00:25:28.480609   34413 start.go:233] waiting for cluster config update ...
	I1128 00:25:28.480618   34413 start.go:242] writing updated cluster config ...
	I1128 00:25:28.480963   34413 ssh_runner.go:195] Run: rm -f paused
	I1128 00:25:28.526027   34413 start.go:600] kubectl: 1.28.4, cluster: 1.24.4 (minor skew: 4)
	I1128 00:25:28.527525   34413 out.go:177] 
	W1128 00:25:28.528995   34413 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.24.4.
	I1128 00:25:28.530423   34413 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1128 00:25:28.531919   34413 out.go:177] * Done! kubectl is now configured to use "test-preload-280327" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 00:24:31 UTC, ends at Tue 2023-11-28 00:25:29 UTC. --
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.521092469Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701131129521081887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=aa46a381-0c69-412e-8d7c-34a4496ae214 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.521901772Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=33401b27-34a3-41c2-83e3-a59c041e7e7b name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.521944545Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=33401b27-34a3-41c2-83e3-a59c041e7e7b name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.522086253Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f04c6f996fbb238401dd971b3c84ddfc567364f9f56acd27406d217bb36c933e,PodSandboxId:07cf300837d8c096faf04926ea772c6a926f174ae33d37547f49b326ffa472fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1701131118506620995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-w6fgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d548a8-d578-4d10-8273-fe94be98c5f8,},Annotations:map[string]string{io.kubernetes.container.hash: bb02b47d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0cdd38bc77477cd5be80c5be0b31b856e410003f2ce2044b5cc839424e473f,PodSandboxId:5f76d5a58344647f17595115853bb7bd8149607edaa44be279969e397fed3c8b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701131115745780592,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 3dc47545-ed6a-4502-b893-23977ad84222,},Annotations:map[string]string{io.kubernetes.container.hash: b1ecdbee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b34873e3941cf31d10c9a59dc50524e1450121afd991970e60ddfc7f837bdf56,PodSandboxId:fa02c6a6eb6798c4d824556e441747ba6e084186b6a2748ae102806f31b732c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1701131115055396513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7ld42,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3e2ac43e-c359-41c3-9bc6-8acdb338ae3b,},Annotations:map[string]string{io.kubernetes.container.hash: d71b9a3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58e0b184dfdf42bca5890c1bb0e428ae7f7c3030324b48f19c0de9c8c951c21a,PodSandboxId:63f4eafbe1b77abbade9a274070bbabfadab6af695051b5d3c566866b7949f80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1701131107474373763,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-280327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c734cf4c
35b5dac62a1488965e74f7,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc4798bd3b7a064dd7967f877a8574b93ecae24cbcfd5e2e1a12442844e38bd,PodSandboxId:7873f6c08f84f87f566fd849273fd6cb442ae61d51e8a6e66613a6a43e809fd3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1701131106922326324,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-280327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b0c2b49bb6b5d9bbeb16c599350fe73,},Annotations:map[string]string
{io.kubernetes.container.hash: 63aa125a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30949f9b4d9412d041feb0041bad4aa1dcbe9ec66d37dad9598d4b1c1b96b23,PodSandboxId:170ce3575188399be7cc9268e62d5a2ad97ea7637311a7c402325e8647551887,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1701131106853713143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-280327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716718783d9e53cac240f963a3e9912b,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e91a766370b3b745542d493c964a79c922a0e3f0decaf4db18c77445d24057,PodSandboxId:d5a86156dce50053a01365352c87387addf2fdb11afb0e7f325ca0bc81739886,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1701131106676492024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-280327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161ef3d80f518cc01736d39d89cd345b,},Annotations:map[strin
g]string{io.kubernetes.container.hash: bc4b0ae7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=33401b27-34a3-41c2-83e3-a59c041e7e7b name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.558782000Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=24294f70-6d92-459d-9d52-ea33c939e299 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.558847069Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=24294f70-6d92-459d-9d52-ea33c939e299 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.559869276Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f001a372-cf97-425c-82a5-c39513c28320 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.560327851Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701131129560313254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=f001a372-cf97-425c-82a5-c39513c28320 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.560808513Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6deb0c34-a565-4b80-a3fa-0e8ab122cddb name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.560853283Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6deb0c34-a565-4b80-a3fa-0e8ab122cddb name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.560999454Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f04c6f996fbb238401dd971b3c84ddfc567364f9f56acd27406d217bb36c933e,PodSandboxId:07cf300837d8c096faf04926ea772c6a926f174ae33d37547f49b326ffa472fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1701131118506620995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-w6fgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d548a8-d578-4d10-8273-fe94be98c5f8,},Annotations:map[string]string{io.kubernetes.container.hash: bb02b47d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0cdd38bc77477cd5be80c5be0b31b856e410003f2ce2044b5cc839424e473f,PodSandboxId:5f76d5a58344647f17595115853bb7bd8149607edaa44be279969e397fed3c8b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701131115745780592,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 3dc47545-ed6a-4502-b893-23977ad84222,},Annotations:map[string]string{io.kubernetes.container.hash: b1ecdbee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b34873e3941cf31d10c9a59dc50524e1450121afd991970e60ddfc7f837bdf56,PodSandboxId:fa02c6a6eb6798c4d824556e441747ba6e084186b6a2748ae102806f31b732c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1701131115055396513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7ld42,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3e2ac43e-c359-41c3-9bc6-8acdb338ae3b,},Annotations:map[string]string{io.kubernetes.container.hash: d71b9a3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58e0b184dfdf42bca5890c1bb0e428ae7f7c3030324b48f19c0de9c8c951c21a,PodSandboxId:63f4eafbe1b77abbade9a274070bbabfadab6af695051b5d3c566866b7949f80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1701131107474373763,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-280327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c734cf4c
35b5dac62a1488965e74f7,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc4798bd3b7a064dd7967f877a8574b93ecae24cbcfd5e2e1a12442844e38bd,PodSandboxId:7873f6c08f84f87f566fd849273fd6cb442ae61d51e8a6e66613a6a43e809fd3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1701131106922326324,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-280327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b0c2b49bb6b5d9bbeb16c599350fe73,},Annotations:map[string]string
{io.kubernetes.container.hash: 63aa125a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30949f9b4d9412d041feb0041bad4aa1dcbe9ec66d37dad9598d4b1c1b96b23,PodSandboxId:170ce3575188399be7cc9268e62d5a2ad97ea7637311a7c402325e8647551887,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1701131106853713143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-280327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716718783d9e53cac240f963a3e9912b,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e91a766370b3b745542d493c964a79c922a0e3f0decaf4db18c77445d24057,PodSandboxId:d5a86156dce50053a01365352c87387addf2fdb11afb0e7f325ca0bc81739886,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1701131106676492024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-280327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161ef3d80f518cc01736d39d89cd345b,},Annotations:map[strin
g]string{io.kubernetes.container.hash: bc4b0ae7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6deb0c34-a565-4b80-a3fa-0e8ab122cddb name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.600035251Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=63a7ed40-ec84-4559-9a06-9219333df5e4 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.600091260Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=63a7ed40-ec84-4559-9a06-9219333df5e4 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.601356116Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9a51b26a-fb49-4b3e-a1b7-b44fab6f2ae4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.601831507Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701131129601818012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=9a51b26a-fb49-4b3e-a1b7-b44fab6f2ae4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.602528129Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5dc8aa03-529a-45f5-8a08-155acd89a376 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.602575461Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5dc8aa03-529a-45f5-8a08-155acd89a376 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.602714703Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f04c6f996fbb238401dd971b3c84ddfc567364f9f56acd27406d217bb36c933e,PodSandboxId:07cf300837d8c096faf04926ea772c6a926f174ae33d37547f49b326ffa472fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1701131118506620995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-w6fgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d548a8-d578-4d10-8273-fe94be98c5f8,},Annotations:map[string]string{io.kubernetes.container.hash: bb02b47d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0cdd38bc77477cd5be80c5be0b31b856e410003f2ce2044b5cc839424e473f,PodSandboxId:5f76d5a58344647f17595115853bb7bd8149607edaa44be279969e397fed3c8b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701131115745780592,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 3dc47545-ed6a-4502-b893-23977ad84222,},Annotations:map[string]string{io.kubernetes.container.hash: b1ecdbee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b34873e3941cf31d10c9a59dc50524e1450121afd991970e60ddfc7f837bdf56,PodSandboxId:fa02c6a6eb6798c4d824556e441747ba6e084186b6a2748ae102806f31b732c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1701131115055396513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7ld42,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3e2ac43e-c359-41c3-9bc6-8acdb338ae3b,},Annotations:map[string]string{io.kubernetes.container.hash: d71b9a3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58e0b184dfdf42bca5890c1bb0e428ae7f7c3030324b48f19c0de9c8c951c21a,PodSandboxId:63f4eafbe1b77abbade9a274070bbabfadab6af695051b5d3c566866b7949f80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1701131107474373763,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-280327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c734cf4c
35b5dac62a1488965e74f7,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc4798bd3b7a064dd7967f877a8574b93ecae24cbcfd5e2e1a12442844e38bd,PodSandboxId:7873f6c08f84f87f566fd849273fd6cb442ae61d51e8a6e66613a6a43e809fd3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1701131106922326324,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-280327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b0c2b49bb6b5d9bbeb16c599350fe73,},Annotations:map[string]string
{io.kubernetes.container.hash: 63aa125a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30949f9b4d9412d041feb0041bad4aa1dcbe9ec66d37dad9598d4b1c1b96b23,PodSandboxId:170ce3575188399be7cc9268e62d5a2ad97ea7637311a7c402325e8647551887,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1701131106853713143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-280327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716718783d9e53cac240f963a3e9912b,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e91a766370b3b745542d493c964a79c922a0e3f0decaf4db18c77445d24057,PodSandboxId:d5a86156dce50053a01365352c87387addf2fdb11afb0e7f325ca0bc81739886,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1701131106676492024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-280327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161ef3d80f518cc01736d39d89cd345b,},Annotations:map[strin
g]string{io.kubernetes.container.hash: bc4b0ae7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5dc8aa03-529a-45f5-8a08-155acd89a376 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.633313605Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=92630bc9-f027-495a-9542-6754a5bcb103 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.633367771Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=92630bc9-f027-495a-9542-6754a5bcb103 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.634492526Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=cb6e9d4e-a2e4-47b9-841b-fc74ae2bcbef name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.634886557Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701131129634873410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=cb6e9d4e-a2e4-47b9-841b-fc74ae2bcbef name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.635463760Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f668a43e-8cd6-4f76-9a33-23dc72163bf2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.635505749Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f668a43e-8cd6-4f76-9a33-23dc72163bf2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:25:29 test-preload-280327 crio[717]: time="2023-11-28 00:25:29.635650807Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f04c6f996fbb238401dd971b3c84ddfc567364f9f56acd27406d217bb36c933e,PodSandboxId:07cf300837d8c096faf04926ea772c6a926f174ae33d37547f49b326ffa472fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1701131118506620995,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-w6fgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d548a8-d578-4d10-8273-fe94be98c5f8,},Annotations:map[string]string{io.kubernetes.container.hash: bb02b47d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0cdd38bc77477cd5be80c5be0b31b856e410003f2ce2044b5cc839424e473f,PodSandboxId:5f76d5a58344647f17595115853bb7bd8149607edaa44be279969e397fed3c8b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701131115745780592,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 3dc47545-ed6a-4502-b893-23977ad84222,},Annotations:map[string]string{io.kubernetes.container.hash: b1ecdbee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b34873e3941cf31d10c9a59dc50524e1450121afd991970e60ddfc7f837bdf56,PodSandboxId:fa02c6a6eb6798c4d824556e441747ba6e084186b6a2748ae102806f31b732c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1701131115055396513,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7ld42,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3e2ac43e-c359-41c3-9bc6-8acdb338ae3b,},Annotations:map[string]string{io.kubernetes.container.hash: d71b9a3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58e0b184dfdf42bca5890c1bb0e428ae7f7c3030324b48f19c0de9c8c951c21a,PodSandboxId:63f4eafbe1b77abbade9a274070bbabfadab6af695051b5d3c566866b7949f80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1701131107474373763,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-280327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83c734cf4c
35b5dac62a1488965e74f7,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc4798bd3b7a064dd7967f877a8574b93ecae24cbcfd5e2e1a12442844e38bd,PodSandboxId:7873f6c08f84f87f566fd849273fd6cb442ae61d51e8a6e66613a6a43e809fd3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1701131106922326324,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-280327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b0c2b49bb6b5d9bbeb16c599350fe73,},Annotations:map[string]string
{io.kubernetes.container.hash: 63aa125a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e30949f9b4d9412d041feb0041bad4aa1dcbe9ec66d37dad9598d4b1c1b96b23,PodSandboxId:170ce3575188399be7cc9268e62d5a2ad97ea7637311a7c402325e8647551887,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1701131106853713143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-280327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 716718783d9e53cac240f963a3e9912b,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e91a766370b3b745542d493c964a79c922a0e3f0decaf4db18c77445d24057,PodSandboxId:d5a86156dce50053a01365352c87387addf2fdb11afb0e7f325ca0bc81739886,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1701131106676492024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-280327,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 161ef3d80f518cc01736d39d89cd345b,},Annotations:map[strin
g]string{io.kubernetes.container.hash: bc4b0ae7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f668a43e-8cd6-4f76-9a33-23dc72163bf2 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f04c6f996fbb2       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   11 seconds ago      Running             coredns                   1                   07cf300837d8c       coredns-6d4b75cb6d-w6fgj
	6c0cdd38bc774       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   5f76d5a583446       storage-provisioner
	b34873e3941cf       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   fa02c6a6eb679       kube-proxy-7ld42
	58e0b184dfdf4       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   22 seconds ago      Running             kube-scheduler            1                   63f4eafbe1b77       kube-scheduler-test-preload-280327
	7cc4798bd3b7a       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   22 seconds ago      Running             etcd                      1                   7873f6c08f84f       etcd-test-preload-280327
	e30949f9b4d94       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   22 seconds ago      Running             kube-controller-manager   1                   170ce35751883       kube-controller-manager-test-preload-280327
	b9e91a766370b       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   23 seconds ago      Running             kube-apiserver            1                   d5a86156dce50       kube-apiserver-test-preload-280327
	
	* 
	* ==> coredns [f04c6f996fbb238401dd971b3c84ddfc567364f9f56acd27406d217bb36c933e] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:56163 - 6033 "HINFO IN 913498354780840977.4626418545229741769. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020456418s
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-280327
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-280327
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=test-preload-280327
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T00_23_35_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 00:23:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-280327
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 00:25:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 00:25:23 +0000   Tue, 28 Nov 2023 00:23:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 00:25:23 +0000   Tue, 28 Nov 2023 00:23:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 00:25:23 +0000   Tue, 28 Nov 2023 00:23:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 00:25:23 +0000   Tue, 28 Nov 2023 00:25:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.42
	  Hostname:    test-preload-280327
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f24df3c5de8467799f78f6ce3e7a772
	  System UUID:                4f24df3c-5de8-4677-99f7-8f6ce3e7a772
	  Boot ID:                    d88a1d0d-0d3c-4cf7-8f1f-10096386c7f4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-w6fgj                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     101s
	  kube-system                 etcd-test-preload-280327                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         114s
	  kube-system                 kube-apiserver-test-preload-280327             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-test-preload-280327    200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-7ld42                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-test-preload-280327             100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  Starting                 99s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m4s (x6 over 2m4s)  kubelet          Node test-preload-280327 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x5 over 2m4s)  kubelet          Node test-preload-280327 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x5 over 2m4s)  kubelet          Node test-preload-280327 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s                 kubelet          Node test-preload-280327 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s                 kubelet          Node test-preload-280327 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s                 kubelet          Node test-preload-280327 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  114s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                104s                 kubelet          Node test-preload-280327 status is now: NodeReady
	  Normal  RegisteredNode           102s                 node-controller  Node test-preload-280327 event: Registered Node test-preload-280327 in Controller
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)    kubelet          Node test-preload-280327 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)    kubelet          Node test-preload-280327 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)    kubelet          Node test-preload-280327 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                   node-controller  Node test-preload-280327 event: Registered Node test-preload-280327 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov28 00:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066876] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.310925] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.342306] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.146496] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.437044] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.556426] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.102860] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.146993] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.105237] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.218659] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[Nov28 00:25] systemd-fstab-generator[1098]: Ignoring "noauto" for root device
	[ +10.051480] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.293799] kauditd_printk_skb: 13 callbacks suppressed
	
	* 
	* ==> etcd [7cc4798bd3b7a064dd7967f877a8574b93ecae24cbcfd5e2e1a12442844e38bd] <==
	* {"level":"info","ts":"2023-11-28T00:25:08.922Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"be5e8f7004ae306c","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-11-28T00:25:08.922Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-11-28T00:25:08.922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be5e8f7004ae306c switched to configuration voters=(13717559226294743148)"}
	{"level":"info","ts":"2023-11-28T00:25:08.923Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"beed476d98f529f8","local-member-id":"be5e8f7004ae306c","added-peer-id":"be5e8f7004ae306c","added-peer-peer-urls":["https://192.168.39.42:2380"]}
	{"level":"info","ts":"2023-11-28T00:25:08.923Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"beed476d98f529f8","local-member-id":"be5e8f7004ae306c","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T00:25:08.923Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T00:25:08.937Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.42:2380"}
	{"level":"info","ts":"2023-11-28T00:25:08.938Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.42:2380"}
	{"level":"info","ts":"2023-11-28T00:25:08.938Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-28T00:25:08.938Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"be5e8f7004ae306c","initial-advertise-peer-urls":["https://192.168.39.42:2380"],"listen-peer-urls":["https://192.168.39.42:2380"],"advertise-client-urls":["https://192.168.39.42:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.42:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-28T00:25:08.938Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-28T00:25:10.501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be5e8f7004ae306c is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-28T00:25:10.501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be5e8f7004ae306c became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-28T00:25:10.501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be5e8f7004ae306c received MsgPreVoteResp from be5e8f7004ae306c at term 2"}
	{"level":"info","ts":"2023-11-28T00:25:10.501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be5e8f7004ae306c became candidate at term 3"}
	{"level":"info","ts":"2023-11-28T00:25:10.501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be5e8f7004ae306c received MsgVoteResp from be5e8f7004ae306c at term 3"}
	{"level":"info","ts":"2023-11-28T00:25:10.501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be5e8f7004ae306c became leader at term 3"}
	{"level":"info","ts":"2023-11-28T00:25:10.501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: be5e8f7004ae306c elected leader be5e8f7004ae306c at term 3"}
	{"level":"info","ts":"2023-11-28T00:25:10.503Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"be5e8f7004ae306c","local-member-attributes":"{Name:test-preload-280327 ClientURLs:[https://192.168.39.42:2379]}","request-path":"/0/members/be5e8f7004ae306c/attributes","cluster-id":"beed476d98f529f8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-28T00:25:10.503Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T00:25:10.504Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-28T00:25:10.504Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T00:25:10.505Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.42:2379"}
	{"level":"info","ts":"2023-11-28T00:25:10.505Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-28T00:25:10.505Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:25:29 up 1 min,  0 users,  load average: 0.79, 0.24, 0.08
	Linux test-preload-280327 5.10.57 #1 SMP Mon Nov 27 21:58:27 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b9e91a766370b3b745542d493c964a79c922a0e3f0decaf4db18c77445d24057] <==
	* I1128 00:25:13.027794       1 controller.go:85] Starting OpenAPI V3 controller
	I1128 00:25:13.027831       1 naming_controller.go:291] Starting NamingConditionController
	I1128 00:25:13.028184       1 establishing_controller.go:76] Starting EstablishingController
	I1128 00:25:13.028523       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1128 00:25:13.028569       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1128 00:25:13.028600       1 crd_finalizer.go:266] Starting CRDFinalizer
	E1128 00:25:13.112467       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1128 00:25:13.168554       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1128 00:25:13.170126       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1128 00:25:13.172707       1 cache.go:39] Caches are synced for autoregister controller
	I1128 00:25:13.172874       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1128 00:25:13.173165       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1128 00:25:13.186443       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1128 00:25:13.190702       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1128 00:25:13.195362       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1128 00:25:13.655601       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1128 00:25:13.972855       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1128 00:25:14.809555       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1128 00:25:14.821661       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1128 00:25:14.871600       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1128 00:25:14.895868       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1128 00:25:14.913675       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1128 00:25:15.458617       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1128 00:25:25.669034       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1128 00:25:25.671047       1 controller.go:611] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [e30949f9b4d9412d041feb0041bad4aa1dcbe9ec66d37dad9598d4b1c1b96b23] <==
	* I1128 00:25:25.650747       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1128 00:25:25.653075       1 shared_informer.go:262] Caches are synced for TTL
	I1128 00:25:25.653561       1 shared_informer.go:262] Caches are synced for job
	I1128 00:25:25.655314       1 shared_informer.go:262] Caches are synced for endpoint
	I1128 00:25:25.658505       1 shared_informer.go:262] Caches are synced for PVC protection
	I1128 00:25:25.659710       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1128 00:25:25.672060       1 shared_informer.go:262] Caches are synced for daemon sets
	I1128 00:25:25.672181       1 shared_informer.go:262] Caches are synced for expand
	I1128 00:25:25.681044       1 shared_informer.go:262] Caches are synced for persistent volume
	I1128 00:25:25.694358       1 shared_informer.go:262] Caches are synced for taint
	I1128 00:25:25.694628       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1128 00:25:25.696082       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1128 00:25:25.702517       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1128 00:25:25.702607       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-280327. Assuming now as a timestamp.
	I1128 00:25:25.702673       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1128 00:25:25.703014       1 shared_informer.go:262] Caches are synced for disruption
	I1128 00:25:25.703075       1 disruption.go:371] Sending events to api server.
	I1128 00:25:25.703354       1 event.go:294] "Event occurred" object="test-preload-280327" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-280327 event: Registered Node test-preload-280327 in Controller"
	I1128 00:25:25.724429       1 shared_informer.go:262] Caches are synced for stateful set
	I1128 00:25:25.729286       1 shared_informer.go:262] Caches are synced for ephemeral
	I1128 00:25:25.821879       1 shared_informer.go:262] Caches are synced for resource quota
	I1128 00:25:25.844809       1 shared_informer.go:262] Caches are synced for resource quota
	I1128 00:25:26.268326       1 shared_informer.go:262] Caches are synced for garbage collector
	I1128 00:25:26.268399       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1128 00:25:26.323941       1 shared_informer.go:262] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [b34873e3941cf31d10c9a59dc50524e1450121afd991970e60ddfc7f837bdf56] <==
	* I1128 00:25:15.397258       1 node.go:163] Successfully retrieved node IP: 192.168.39.42
	I1128 00:25:15.397414       1 server_others.go:138] "Detected node IP" address="192.168.39.42"
	I1128 00:25:15.397556       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1128 00:25:15.446989       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1128 00:25:15.447058       1 server_others.go:206] "Using iptables Proxier"
	I1128 00:25:15.447094       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1128 00:25:15.448975       1 server.go:661] "Version info" version="v1.24.4"
	I1128 00:25:15.449042       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 00:25:15.449747       1 config.go:317] "Starting service config controller"
	I1128 00:25:15.449792       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1128 00:25:15.449971       1 config.go:226] "Starting endpoint slice config controller"
	I1128 00:25:15.449999       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1128 00:25:15.455505       1 config.go:444] "Starting node config controller"
	I1128 00:25:15.455544       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1128 00:25:15.551065       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1128 00:25:15.551175       1 shared_informer.go:262] Caches are synced for service config
	I1128 00:25:15.556091       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [58e0b184dfdf42bca5890c1bb0e428ae7f7c3030324b48f19c0de9c8c951c21a] <==
	* I1128 00:25:10.319957       1 serving.go:348] Generated self-signed cert in-memory
	W1128 00:25:13.044418       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1128 00:25:13.044539       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 00:25:13.044552       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1128 00:25:13.044561       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1128 00:25:13.095661       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1128 00:25:13.095790       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 00:25:13.104315       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1128 00:25:13.104801       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1128 00:25:13.104966       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1128 00:25:13.105100       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1128 00:25:13.205401       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 00:24:31 UTC, ends at Tue 2023-11-28 00:25:30 UTC. --
	Nov 28 00:25:13 test-preload-280327 kubelet[1104]: I1128 00:25:13.781373    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e2ac43e-c359-41c3-9bc6-8acdb338ae3b-xtables-lock\") pod \"kube-proxy-7ld42\" (UID: \"3e2ac43e-c359-41c3-9bc6-8acdb338ae3b\") " pod="kube-system/kube-proxy-7ld42"
	Nov 28 00:25:13 test-preload-280327 kubelet[1104]: I1128 00:25:13.781417    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-479bd\" (UniqueName: \"kubernetes.io/projected/3e2ac43e-c359-41c3-9bc6-8acdb338ae3b-kube-api-access-479bd\") pod \"kube-proxy-7ld42\" (UID: \"3e2ac43e-c359-41c3-9bc6-8acdb338ae3b\") " pod="kube-system/kube-proxy-7ld42"
	Nov 28 00:25:13 test-preload-280327 kubelet[1104]: I1128 00:25:13.781441    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmtz5\" (UniqueName: \"kubernetes.io/projected/84d548a8-d578-4d10-8273-fe94be98c5f8-kube-api-access-zmtz5\") pod \"coredns-6d4b75cb6d-w6fgj\" (UID: \"84d548a8-d578-4d10-8273-fe94be98c5f8\") " pod="kube-system/coredns-6d4b75cb6d-w6fgj"
	Nov 28 00:25:13 test-preload-280327 kubelet[1104]: I1128 00:25:13.781461    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3dc47545-ed6a-4502-b893-23977ad84222-tmp\") pod \"storage-provisioner\" (UID: \"3dc47545-ed6a-4502-b893-23977ad84222\") " pod="kube-system/storage-provisioner"
	Nov 28 00:25:13 test-preload-280327 kubelet[1104]: I1128 00:25:13.781585    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84d548a8-d578-4d10-8273-fe94be98c5f8-config-volume\") pod \"coredns-6d4b75cb6d-w6fgj\" (UID: \"84d548a8-d578-4d10-8273-fe94be98c5f8\") " pod="kube-system/coredns-6d4b75cb6d-w6fgj"
	Nov 28 00:25:13 test-preload-280327 kubelet[1104]: I1128 00:25:13.781639    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnxfq\" (UniqueName: \"kubernetes.io/projected/3dc47545-ed6a-4502-b893-23977ad84222-kube-api-access-xnxfq\") pod \"storage-provisioner\" (UID: \"3dc47545-ed6a-4502-b893-23977ad84222\") " pod="kube-system/storage-provisioner"
	Nov 28 00:25:13 test-preload-280327 kubelet[1104]: I1128 00:25:13.781682    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3e2ac43e-c359-41c3-9bc6-8acdb338ae3b-kube-proxy\") pod \"kube-proxy-7ld42\" (UID: \"3e2ac43e-c359-41c3-9bc6-8acdb338ae3b\") " pod="kube-system/kube-proxy-7ld42"
	Nov 28 00:25:13 test-preload-280327 kubelet[1104]: I1128 00:25:13.781709    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e2ac43e-c359-41c3-9bc6-8acdb338ae3b-lib-modules\") pod \"kube-proxy-7ld42\" (UID: \"3e2ac43e-c359-41c3-9bc6-8acdb338ae3b\") " pod="kube-system/kube-proxy-7ld42"
	Nov 28 00:25:13 test-preload-280327 kubelet[1104]: I1128 00:25:13.781753    1104 reconciler.go:159] "Reconciler: start to sync state"
	Nov 28 00:25:14 test-preload-280327 kubelet[1104]: I1128 00:25:14.192547    1104 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6acc0e80-c6dc-4798-9777-0ebb1ae1e84f-config-volume\") pod \"6acc0e80-c6dc-4798-9777-0ebb1ae1e84f\" (UID: \"6acc0e80-c6dc-4798-9777-0ebb1ae1e84f\") "
	Nov 28 00:25:14 test-preload-280327 kubelet[1104]: I1128 00:25:14.192618    1104 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t48d\" (UniqueName: \"kubernetes.io/projected/6acc0e80-c6dc-4798-9777-0ebb1ae1e84f-kube-api-access-5t48d\") pod \"6acc0e80-c6dc-4798-9777-0ebb1ae1e84f\" (UID: \"6acc0e80-c6dc-4798-9777-0ebb1ae1e84f\") "
	Nov 28 00:25:14 test-preload-280327 kubelet[1104]: E1128 00:25:14.193771    1104 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 28 00:25:14 test-preload-280327 kubelet[1104]: E1128 00:25:14.193886    1104 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/84d548a8-d578-4d10-8273-fe94be98c5f8-config-volume podName:84d548a8-d578-4d10-8273-fe94be98c5f8 nodeName:}" failed. No retries permitted until 2023-11-28 00:25:14.693854939 +0000 UTC m=+9.243393111 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/84d548a8-d578-4d10-8273-fe94be98c5f8-config-volume") pod "coredns-6d4b75cb6d-w6fgj" (UID: "84d548a8-d578-4d10-8273-fe94be98c5f8") : object "kube-system"/"coredns" not registered
	Nov 28 00:25:14 test-preload-280327 kubelet[1104]: W1128 00:25:14.195731    1104 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/6acc0e80-c6dc-4798-9777-0ebb1ae1e84f/volumes/kubernetes.io~projected/kube-api-access-5t48d: clearQuota called, but quotas disabled
	Nov 28 00:25:14 test-preload-280327 kubelet[1104]: I1128 00:25:14.196042    1104 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6acc0e80-c6dc-4798-9777-0ebb1ae1e84f-kube-api-access-5t48d" (OuterVolumeSpecName: "kube-api-access-5t48d") pod "6acc0e80-c6dc-4798-9777-0ebb1ae1e84f" (UID: "6acc0e80-c6dc-4798-9777-0ebb1ae1e84f"). InnerVolumeSpecName "kube-api-access-5t48d". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 28 00:25:14 test-preload-280327 kubelet[1104]: W1128 00:25:14.196290    1104 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/6acc0e80-c6dc-4798-9777-0ebb1ae1e84f/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Nov 28 00:25:14 test-preload-280327 kubelet[1104]: I1128 00:25:14.197164    1104 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6acc0e80-c6dc-4798-9777-0ebb1ae1e84f-config-volume" (OuterVolumeSpecName: "config-volume") pod "6acc0e80-c6dc-4798-9777-0ebb1ae1e84f" (UID: "6acc0e80-c6dc-4798-9777-0ebb1ae1e84f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Nov 28 00:25:14 test-preload-280327 kubelet[1104]: I1128 00:25:14.293852    1104 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6acc0e80-c6dc-4798-9777-0ebb1ae1e84f-config-volume\") on node \"test-preload-280327\" DevicePath \"\""
	Nov 28 00:25:14 test-preload-280327 kubelet[1104]: I1128 00:25:14.293885    1104 reconciler.go:384] "Volume detached for volume \"kube-api-access-5t48d\" (UniqueName: \"kubernetes.io/projected/6acc0e80-c6dc-4798-9777-0ebb1ae1e84f-kube-api-access-5t48d\") on node \"test-preload-280327\" DevicePath \"\""
	Nov 28 00:25:14 test-preload-280327 kubelet[1104]: E1128 00:25:14.696880    1104 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 28 00:25:14 test-preload-280327 kubelet[1104]: E1128 00:25:14.696944    1104 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/84d548a8-d578-4d10-8273-fe94be98c5f8-config-volume podName:84d548a8-d578-4d10-8273-fe94be98c5f8 nodeName:}" failed. No retries permitted until 2023-11-28 00:25:15.696929097 +0000 UTC m=+10.246467279 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/84d548a8-d578-4d10-8273-fe94be98c5f8-config-volume") pod "coredns-6d4b75cb6d-w6fgj" (UID: "84d548a8-d578-4d10-8273-fe94be98c5f8") : object "kube-system"/"coredns" not registered
	Nov 28 00:25:14 test-preload-280327 kubelet[1104]: E1128 00:25:14.716058    1104 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-w6fgj" podUID=84d548a8-d578-4d10-8273-fe94be98c5f8
	Nov 28 00:25:15 test-preload-280327 kubelet[1104]: E1128 00:25:15.702949    1104 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 28 00:25:15 test-preload-280327 kubelet[1104]: E1128 00:25:15.703155    1104 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/84d548a8-d578-4d10-8273-fe94be98c5f8-config-volume podName:84d548a8-d578-4d10-8273-fe94be98c5f8 nodeName:}" failed. No retries permitted until 2023-11-28 00:25:17.703049718 +0000 UTC m=+12.252587891 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/84d548a8-d578-4d10-8273-fe94be98c5f8-config-volume") pod "coredns-6d4b75cb6d-w6fgj" (UID: "84d548a8-d578-4d10-8273-fe94be98c5f8") : object "kube-system"/"coredns" not registered
	Nov 28 00:25:15 test-preload-280327 kubelet[1104]: I1128 00:25:15.724692    1104 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=6acc0e80-c6dc-4798-9777-0ebb1ae1e84f path="/var/lib/kubelet/pods/6acc0e80-c6dc-4798-9777-0ebb1ae1e84f/volumes"
	
	* 
	* ==> storage-provisioner [6c0cdd38bc77477cd5be80c5be0b31b856e410003f2ce2044b5cc839424e473f] <==
	* I1128 00:25:15.845155       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 00:25:15.854904       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 00:25:15.854989       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-280327 -n test-preload-280327
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-280327 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-280327" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-280327
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-280327: (1.10671038s)
--- FAIL: TestPreload (254.28s)

                                                
                                    
TestRunningBinaryUpgrade (172.68s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.3871591042.exe start -p running-upgrade-188202 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1128 00:28:50.988292   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.3871591042.exe start -p running-upgrade-188202 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m25.48793888s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-188202 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1128 00:30:10.728189   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-188202 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (23.361695686s)

                                                
                                                
-- stdout --
	* [running-upgrade-188202] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-188202 in cluster running-upgrade-188202
	* Updating the running kvm2 "running-upgrade-188202" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 00:29:55.946106   37601 out.go:296] Setting OutFile to fd 1 ...
	I1128 00:29:55.946239   37601 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:29:55.946251   37601 out.go:309] Setting ErrFile to fd 2...
	I1128 00:29:55.946255   37601 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:29:55.946463   37601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1128 00:29:55.947026   37601 out.go:303] Setting JSON to false
	I1128 00:29:55.948185   37601 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4343,"bootTime":1701127053,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 00:29:55.948245   37601 start.go:138] virtualization: kvm guest
	I1128 00:29:55.950617   37601 out.go:177] * [running-upgrade-188202] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 00:29:55.952669   37601 out.go:177]   - MINIKUBE_LOCATION=17206
	I1128 00:29:55.952735   37601 notify.go:220] Checking for updates...
	I1128 00:29:55.954334   37601 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 00:29:55.955667   37601 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:29:55.957215   37601 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1128 00:29:55.958611   37601 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 00:29:55.959828   37601 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 00:29:55.961398   37601 config.go:182] Loaded profile config "running-upgrade-188202": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1128 00:29:55.961418   37601 start_flags.go:694] config upgrade: Driver=kvm2
	I1128 00:29:55.961426   37601 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1128 00:29:55.961494   37601 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/running-upgrade-188202/config.json ...
	I1128 00:29:55.962105   37601 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 00:29:55.962154   37601 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:29:55.980178   37601 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45885
	I1128 00:29:55.980594   37601 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:29:55.981164   37601 main.go:141] libmachine: Using API Version  1
	I1128 00:29:55.981195   37601 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:29:55.981537   37601 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:29:55.981667   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .DriverName
	I1128 00:29:55.983714   37601 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1128 00:29:55.985050   37601 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 00:29:55.985338   37601 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 00:29:55.985372   37601 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:29:56.000220   37601 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35255
	I1128 00:29:56.000641   37601 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:29:56.001124   37601 main.go:141] libmachine: Using API Version  1
	I1128 00:29:56.001148   37601 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:29:56.001505   37601 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:29:56.001703   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .DriverName
	I1128 00:29:56.039021   37601 out.go:177] * Using the kvm2 driver based on existing profile
	I1128 00:29:56.040286   37601 start.go:298] selected driver: kvm2
	I1128 00:29:56.040306   37601 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-188202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.239 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1128 00:29:56.040431   37601 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 00:29:56.041245   37601 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:29:56.041338   37601 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17206-4749/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 00:29:56.056855   37601 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1128 00:29:56.057239   37601 cni.go:84] Creating CNI manager for ""
	I1128 00:29:56.057259   37601 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1128 00:29:56.057270   37601 start_flags.go:323] config:
	{Name:running-upgrade-188202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.239 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1128 00:29:56.057435   37601 iso.go:125] acquiring lock: {Name:mkcbf4fbddcb89ef7fa17df683cb708781ecb7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:29:56.059341   37601 out.go:177] * Starting control plane node running-upgrade-188202 in cluster running-upgrade-188202
	I1128 00:29:56.060991   37601 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1128 00:29:56.531217   37601 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1128 00:29:56.531387   37601 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/running-upgrade-188202/config.json ...
	I1128 00:29:56.531525   37601 cache.go:107] acquiring lock: {Name:mkd3cf99a5175d73fa3fee682150b562a82e8a22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:29:56.531634   37601 cache.go:115] /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1128 00:29:56.531628   37601 cache.go:107] acquiring lock: {Name:mkc8dc9694ad3ea8f9e215144c924d429650bf65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:29:56.531673   37601 cache.go:107] acquiring lock: {Name:mk8ea84ec10f13b17272f987c9845159d241e0a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:29:56.531689   37601 cache.go:107] acquiring lock: {Name:mkc54e9b1163409c26006035decd56e7d33bdc00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:29:56.531689   37601 cache.go:107] acquiring lock: {Name:mk9999a07ac4ece564d90e2f6a96734a6581f76d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:29:56.531716   37601 cache.go:107] acquiring lock: {Name:mk66bb170155e67d2734b845ddbdf04b35635288 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:29:56.531758   37601 cache.go:107] acquiring lock: {Name:mkbc51c40fe10449c21d7aca0195aa1173c09168 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:29:56.531774   37601 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I1128 00:29:56.531822   37601 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1128 00:29:56.531826   37601 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I1128 00:29:56.531827   37601 cache.go:107] acquiring lock: {Name:mkda29b5bb252175f29b0ff02804aef107dcec03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:29:56.531849   37601 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I1128 00:29:56.531913   37601 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I1128 00:29:56.531686   37601 start.go:365] acquiring machines lock for running-upgrade-188202: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 00:29:56.531805   37601 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1128 00:29:56.532013   37601 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1128 00:29:56.531654   37601 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 148.282µs
	I1128 00:29:56.532068   37601 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1128 00:29:56.533215   37601 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I1128 00:29:56.533244   37601 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1128 00:29:56.533259   37601 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I1128 00:29:56.533242   37601 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I1128 00:29:56.533312   37601 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I1128 00:29:56.533335   37601 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1128 00:29:56.533379   37601 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1128 00:29:56.664516   37601 cache.go:162] opening:  /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1128 00:29:56.666681   37601 cache.go:162] opening:  /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1128 00:29:56.685165   37601 cache.go:162] opening:  /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I1128 00:29:56.701355   37601 cache.go:162] opening:  /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I1128 00:29:56.701486   37601 cache.go:162] opening:  /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I1128 00:29:56.709150   37601 cache.go:162] opening:  /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I1128 00:29:56.710797   37601 cache.go:162] opening:  /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I1128 00:29:56.726159   37601 cache.go:157] /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1128 00:29:56.726180   37601 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 194.513483ms
	I1128 00:29:56.726191   37601 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1128 00:29:57.288079   37601 cache.go:157] /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1128 00:29:57.288110   37601 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 756.422203ms
	I1128 00:29:57.288131   37601 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1128 00:29:57.614784   37601 cache.go:157] /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1128 00:29:57.614823   37601 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.083066615s
	I1128 00:29:57.614866   37601 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1128 00:29:57.781007   37601 cache.go:157] /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1128 00:29:57.781034   37601 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.249319667s
	I1128 00:29:57.781050   37601 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1128 00:29:57.803035   37601 cache.go:157] /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1128 00:29:57.803065   37601 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.27124116s
	I1128 00:29:57.803076   37601 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1128 00:29:58.379203   37601 cache.go:157] /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1128 00:29:58.379233   37601 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.84757657s
	I1128 00:29:58.379253   37601 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1128 00:29:58.425006   37601 cache.go:157] /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1128 00:29:58.425038   37601 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 1.893436602s
	I1128 00:29:58.425053   37601 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1128 00:29:58.425074   37601 cache.go:87] Successfully saved all images to host disk.
	I1128 00:30:14.931220   37601 start.go:369] acquired machines lock for "running-upgrade-188202" in 18.399231379s
	I1128 00:30:14.931287   37601 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:30:14.931303   37601 fix.go:54] fixHost starting: minikube
	I1128 00:30:14.931769   37601 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 00:30:14.931820   37601 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:30:14.950146   37601 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
	I1128 00:30:14.950671   37601 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:30:14.951152   37601 main.go:141] libmachine: Using API Version  1
	I1128 00:30:14.951179   37601 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:30:14.951554   37601 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:30:14.951735   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .DriverName
	I1128 00:30:14.951922   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetState
	I1128 00:30:14.953946   37601 fix.go:102] recreateIfNeeded on running-upgrade-188202: state=Running err=<nil>
	W1128 00:30:14.953982   37601 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:30:14.956382   37601 out.go:177] * Updating the running kvm2 "running-upgrade-188202" VM ...
	I1128 00:30:14.957931   37601 machine.go:88] provisioning docker machine ...
	I1128 00:30:14.957962   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .DriverName
	I1128 00:30:14.958209   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetMachineName
	I1128 00:30:14.958392   37601 buildroot.go:166] provisioning hostname "running-upgrade-188202"
	I1128 00:30:14.958410   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetMachineName
	I1128 00:30:14.958565   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHHostname
	I1128 00:30:14.961622   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:14.962123   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:43:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 01:28:09 +0000 UTC Type:0 Mac:52:54:00:63:43:f0 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-188202 Clientid:01:52:54:00:63:43:f0}
	I1128 00:30:14.962155   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined IP address 192.168.50.239 and MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:14.962356   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHPort
	I1128 00:30:14.962588   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHKeyPath
	I1128 00:30:14.962759   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHKeyPath
	I1128 00:30:14.962911   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHUsername
	I1128 00:30:14.963080   37601 main.go:141] libmachine: Using SSH client type: native
	I1128 00:30:14.963607   37601 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1128 00:30:14.963631   37601 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-188202 && echo "running-upgrade-188202" | sudo tee /etc/hostname
	I1128 00:30:15.121439   37601 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-188202
	
	I1128 00:30:15.121479   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHHostname
	I1128 00:30:15.124915   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:15.125472   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:43:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 01:28:09 +0000 UTC Type:0 Mac:52:54:00:63:43:f0 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-188202 Clientid:01:52:54:00:63:43:f0}
	I1128 00:30:15.125512   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined IP address 192.168.50.239 and MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:15.125979   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHPort
	I1128 00:30:15.126181   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHKeyPath
	I1128 00:30:15.126389   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHKeyPath
	I1128 00:30:15.126605   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHUsername
	I1128 00:30:15.126833   37601 main.go:141] libmachine: Using SSH client type: native
	I1128 00:30:15.127383   37601 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1128 00:30:15.127418   37601 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-188202' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-188202/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-188202' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:30:15.271234   37601 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:30:15.271273   37601 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:30:15.271292   37601 buildroot.go:174] setting up certificates
	I1128 00:30:15.271305   37601 provision.go:83] configureAuth start
	I1128 00:30:15.271317   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetMachineName
	I1128 00:30:15.271615   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetIP
	I1128 00:30:15.275277   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:15.275719   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:43:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 01:28:09 +0000 UTC Type:0 Mac:52:54:00:63:43:f0 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-188202 Clientid:01:52:54:00:63:43:f0}
	I1128 00:30:15.275743   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined IP address 192.168.50.239 and MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:15.276053   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHHostname
	I1128 00:30:15.278991   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:15.279513   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:43:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 01:28:09 +0000 UTC Type:0 Mac:52:54:00:63:43:f0 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-188202 Clientid:01:52:54:00:63:43:f0}
	I1128 00:30:15.279559   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined IP address 192.168.50.239 and MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:15.279808   37601 provision.go:138] copyHostCerts
	I1128 00:30:15.279868   37601 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:30:15.279881   37601 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:30:15.279949   37601 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:30:15.280068   37601 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:30:15.280079   37601 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:30:15.280107   37601 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:30:15.280190   37601 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:30:15.280201   37601 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:30:15.280228   37601 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:30:15.280306   37601 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-188202 san=[192.168.50.239 192.168.50.239 localhost 127.0.0.1 minikube running-upgrade-188202]
	I1128 00:30:15.363500   37601 provision.go:172] copyRemoteCerts
	I1128 00:30:15.363581   37601 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:30:15.363611   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHHostname
	I1128 00:30:15.367396   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:15.367918   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:43:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 01:28:09 +0000 UTC Type:0 Mac:52:54:00:63:43:f0 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-188202 Clientid:01:52:54:00:63:43:f0}
	I1128 00:30:15.367961   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined IP address 192.168.50.239 and MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:15.368205   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHPort
	I1128 00:30:15.368417   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHKeyPath
	I1128 00:30:15.368589   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHUsername
	I1128 00:30:15.368747   37601 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/running-upgrade-188202/id_rsa Username:docker}
	I1128 00:30:15.471157   37601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:30:15.489981   37601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1128 00:30:15.506507   37601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 00:30:15.523849   37601 provision.go:86] duration metric: configureAuth took 252.530633ms
	I1128 00:30:15.523883   37601 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:30:15.524093   37601 config.go:182] Loaded profile config "running-upgrade-188202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1128 00:30:15.524191   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHHostname
	I1128 00:30:15.527260   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:15.527744   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:43:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 01:28:09 +0000 UTC Type:0 Mac:52:54:00:63:43:f0 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-188202 Clientid:01:52:54:00:63:43:f0}
	I1128 00:30:15.527775   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined IP address 192.168.50.239 and MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:15.528128   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHPort
	I1128 00:30:15.528351   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHKeyPath
	I1128 00:30:15.528532   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHKeyPath
	I1128 00:30:15.528703   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHUsername
	I1128 00:30:15.528912   37601 main.go:141] libmachine: Using SSH client type: native
	I1128 00:30:15.529391   37601 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1128 00:30:15.529416   37601 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:30:16.357888   37601 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:30:16.357907   37601 machine.go:91] provisioned docker machine in 1.399959114s
	I1128 00:30:16.357915   37601 start.go:300] post-start starting for "running-upgrade-188202" (driver="kvm2")
	I1128 00:30:16.357923   37601 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:30:16.357936   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .DriverName
	I1128 00:30:16.358243   37601 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:30:16.358276   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHHostname
	I1128 00:30:16.360981   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:16.558009   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:43:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 01:28:09 +0000 UTC Type:0 Mac:52:54:00:63:43:f0 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-188202 Clientid:01:52:54:00:63:43:f0}
	I1128 00:30:16.558046   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined IP address 192.168.50.239 and MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:16.558596   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHPort
	I1128 00:30:16.558807   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHKeyPath
	I1128 00:30:16.559104   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHUsername
	I1128 00:30:16.559316   37601 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/running-upgrade-188202/id_rsa Username:docker}
	I1128 00:30:16.682247   37601 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:30:16.691357   37601 info.go:137] Remote host: Buildroot 2019.02.7
	I1128 00:30:16.691388   37601 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:30:16.691547   37601 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:30:16.691706   37601 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:30:16.691839   37601 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:30:16.704069   37601 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:30:16.740011   37601 start.go:303] post-start completed in 382.081589ms
	I1128 00:30:16.740039   37601 fix.go:56] fixHost completed within 1.808740975s
	I1128 00:30:16.740063   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHHostname
	I1128 00:30:17.194168   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHPort
	I1128 00:30:17.194242   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:17.194265   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:43:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 01:28:09 +0000 UTC Type:0 Mac:52:54:00:63:43:f0 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-188202 Clientid:01:52:54:00:63:43:f0}
	I1128 00:30:17.194283   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined IP address 192.168.50.239 and MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:17.194571   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHKeyPath
	I1128 00:30:17.200871   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHKeyPath
	I1128 00:30:17.201166   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHUsername
	I1128 00:30:17.201428   37601 main.go:141] libmachine: Using SSH client type: native
	I1128 00:30:17.201987   37601 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.239 22 <nil> <nil>}
	I1128 00:30:17.202036   37601 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1128 00:30:17.351382   37601 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701131417.347272402
	
	I1128 00:30:17.351424   37601 fix.go:206] guest clock: 1701131417.347272402
	I1128 00:30:17.351434   37601 fix.go:219] Guest: 2023-11-28 00:30:17.347272402 +0000 UTC Remote: 2023-11-28 00:30:16.740043165 +0000 UTC m=+20.847244037 (delta=607.229237ms)
	I1128 00:30:17.351476   37601 fix.go:190] guest clock delta is within tolerance: 607.229237ms
	I1128 00:30:17.351483   37601 start.go:83] releasing machines lock for "running-upgrade-188202", held for 2.420217355s
	I1128 00:30:17.351531   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .DriverName
	I1128 00:30:17.351857   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetIP
	I1128 00:30:17.355052   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:17.355453   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:43:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 01:28:09 +0000 UTC Type:0 Mac:52:54:00:63:43:f0 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-188202 Clientid:01:52:54:00:63:43:f0}
	I1128 00:30:17.355477   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined IP address 192.168.50.239 and MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:17.355802   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .DriverName
	I1128 00:30:17.356321   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .DriverName
	I1128 00:30:17.356461   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .DriverName
	I1128 00:30:17.356561   37601 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:30:17.356596   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHHostname
	I1128 00:30:17.357291   37601 ssh_runner.go:195] Run: cat /version.json
	I1128 00:30:17.357316   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHHostname
	I1128 00:30:17.361090   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:17.361700   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:17.362177   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:43:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 01:28:09 +0000 UTC Type:0 Mac:52:54:00:63:43:f0 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-188202 Clientid:01:52:54:00:63:43:f0}
	I1128 00:30:17.362202   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined IP address 192.168.50.239 and MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:17.362485   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:43:f0", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 01:28:09 +0000 UTC Type:0 Mac:52:54:00:63:43:f0 Iaid: IPaddr:192.168.50.239 Prefix:24 Hostname:running-upgrade-188202 Clientid:01:52:54:00:63:43:f0}
	I1128 00:30:17.362512   37601 main.go:141] libmachine: (running-upgrade-188202) DBG | domain running-upgrade-188202 has defined IP address 192.168.50.239 and MAC address 52:54:00:63:43:f0 in network minikube-net
	I1128 00:30:17.362629   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHPort
	I1128 00:30:17.362746   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHKeyPath
	I1128 00:30:17.362820   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHUsername
	I1128 00:30:17.362907   37601 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/running-upgrade-188202/id_rsa Username:docker}
	I1128 00:30:17.363560   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHPort
	I1128 00:30:17.363668   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHKeyPath
	I1128 00:30:17.363829   37601 main.go:141] libmachine: (running-upgrade-188202) Calling .GetSSHUsername
	I1128 00:30:17.363944   37601 sshutil.go:53] new ssh client: &{IP:192.168.50.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/running-upgrade-188202/id_rsa Username:docker}
	W1128 00:30:17.483309   37601 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1128 00:30:17.483385   37601 ssh_runner.go:195] Run: systemctl --version
	I1128 00:30:17.490547   37601 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:30:17.654910   37601 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:30:17.661949   37601 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:30:17.662034   37601 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:30:17.668749   37601 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1128 00:30:17.668795   37601 start.go:472] detecting cgroup driver to use...
	I1128 00:30:17.668865   37601 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:30:17.682121   37601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:30:17.692101   37601 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:30:17.692155   37601 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:30:17.701861   37601 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:30:17.711355   37601 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1128 00:30:17.721206   37601 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1128 00:30:17.721273   37601 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:30:17.846549   37601 docker.go:219] disabling docker service ...
	I1128 00:30:17.846632   37601 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:30:18.868243   37601 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.021580435s)
	I1128 00:30:18.868312   37601 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:30:18.882075   37601 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:30:19.028482   37601 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:30:19.203645   37601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:30:19.215532   37601 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:30:19.229545   37601 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1128 00:30:19.229612   37601 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:30:19.239639   37601 out.go:177] 
	W1128 00:30:19.241491   37601 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1128 00:30:19.241521   37601 out.go:239] * 
	* 
	W1128 00:30:19.242557   37601 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 00:30:19.244422   37601 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-188202 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-11-28 00:30:19.265598246 +0000 UTC m=+3920.830624906
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-188202 -n running-upgrade-188202
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-188202 -n running-upgrade-188202: exit status 4 (329.814563ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:30:19.543141   37925 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-188202" does not appear in /home/jenkins/minikube-integration/17206-4749/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-188202" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-188202" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-188202
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-188202: (1.581353571s)
--- FAIL: TestRunningBinaryUpgrade (172.68s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (101s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-165445 --driver=kvm2  --container-runtime=crio
E1128 00:31:55.433272   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-165445 --driver=kvm2  --container-runtime=crio: signal: killed (1m40.389779237s)

                                                
                                                
-- stdout --
	* [NoKubernetes-165445] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-165445
	* Restarting existing kvm2 VM for "NoKubernetes-165445" ...

                                                
                                                
-- /stdout --
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-linux-amd64 start -p NoKubernetes-165445 --driver=kvm2  --container-runtime=crio" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-165445 -n NoKubernetes-165445
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-165445 -n NoKubernetes-165445: exit status 6 (614.682013ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:32:29.098923   41509 status.go:415] kubeconfig endpoint: extract IP: "NoKubernetes-165445" does not appear in /home/jenkins/minikube-integration/17206-4749/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-165445" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (101.00s)
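The "signal: killed (1m40.389779237s)" result is what os/exec reports when the test's deadline kills the child process before `minikube start` finishes. A hedged sketch of the same shape, with the roughly 100-second budget assumed from the duration above rather than taken from the harness:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Roughly 100s budget, assumed from the StartNoArgs duration above.
		ctx, cancel := context.WithTimeout(context.Background(), 100*time.Second)
		defer cancel()

		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
			"start", "-p", "NoKubernetes-165445", "--driver=kvm2", "--container-runtime=crio")
		start := time.Now()
		err := cmd.Run()
		// Once the deadline fires, the child is killed and err reads "signal: killed".
		fmt.Printf("exited after %s: %v (ctx err: %v)\n", time.Since(start), err, ctx.Err())
	}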

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (262.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.1697282627.exe start -p stopped-upgrade-789586 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.1697282627.exe start -p stopped-upgrade-789586 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m9.639770828s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.1697282627.exe -p stopped-upgrade-789586 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.1697282627.exe -p stopped-upgrade-789586 stop: (1m32.75338923s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-789586 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-789586 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (40.494114432s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-789586] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-789586 in cluster stopped-upgrade-789586
	* Restarting existing kvm2 VM for "stopped-upgrade-789586" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 00:36:13.945796   43976 out.go:296] Setting OutFile to fd 1 ...
	I1128 00:36:13.946031   43976 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:36:13.946040   43976 out.go:309] Setting ErrFile to fd 2...
	I1128 00:36:13.946054   43976 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:36:13.946241   43976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1128 00:36:13.946784   43976 out.go:303] Setting JSON to false
	I1128 00:36:13.947803   43976 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4721,"bootTime":1701127053,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 00:36:13.947875   43976 start.go:138] virtualization: kvm guest
	I1128 00:36:13.951243   43976 out.go:177] * [stopped-upgrade-789586] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 00:36:13.953022   43976 out.go:177]   - MINIKUBE_LOCATION=17206
	I1128 00:36:13.953021   43976 notify.go:220] Checking for updates...
	I1128 00:36:13.954884   43976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 00:36:13.956986   43976 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:36:13.958634   43976 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1128 00:36:13.960273   43976 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 00:36:13.961729   43976 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 00:36:13.963488   43976 config.go:182] Loaded profile config "stopped-upgrade-789586": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1128 00:36:13.963504   43976 start_flags.go:694] config upgrade: Driver=kvm2
	I1128 00:36:13.963515   43976 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1128 00:36:13.963605   43976 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/stopped-upgrade-789586/config.json ...
	I1128 00:36:13.964227   43976 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:36:13.964287   43976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:36:13.979754   43976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I1128 00:36:13.980192   43976 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:36:13.980897   43976 main.go:141] libmachine: Using API Version  1
	I1128 00:36:13.980923   43976 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:36:13.981351   43976 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:36:13.981564   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .DriverName
	I1128 00:36:13.983535   43976 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1128 00:36:13.985744   43976 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 00:36:13.986176   43976 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:36:13.986223   43976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:36:14.001459   43976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35685
	I1128 00:36:14.001989   43976 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:36:14.002504   43976 main.go:141] libmachine: Using API Version  1
	I1128 00:36:14.002533   43976 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:36:14.002893   43976 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:36:14.003094   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .DriverName
	I1128 00:36:14.041519   43976 out.go:177] * Using the kvm2 driver based on existing profile
	I1128 00:36:14.042984   43976 start.go:298] selected driver: kvm2
	I1128 00:36:14.043003   43976 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-789586 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.72.157 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1128 00:36:14.043130   43976 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 00:36:14.044106   43976 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:36:14.044196   43976 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17206-4749/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 00:36:14.059657   43976 install.go:137] /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I1128 00:36:14.060127   43976 cni.go:84] Creating CNI manager for ""
	I1128 00:36:14.060156   43976 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1128 00:36:14.060172   43976 start_flags.go:323] config:
	{Name:stopped-upgrade-789586 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.72.157 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1128 00:36:14.060420   43976 iso.go:125] acquiring lock: {Name:mkcbf4fbddcb89ef7fa17df683cb708781ecb7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:36:14.063393   43976 out.go:177] * Starting control plane node stopped-upgrade-789586 in cluster stopped-upgrade-789586
	I1128 00:36:14.065362   43976 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1128 00:36:14.521876   43976 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1128 00:36:14.522044   43976 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/stopped-upgrade-789586/config.json ...
	I1128 00:36:14.522139   43976 cache.go:107] acquiring lock: {Name:mkd3cf99a5175d73fa3fee682150b562a82e8a22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:36:14.522179   43976 cache.go:107] acquiring lock: {Name:mkc8dc9694ad3ea8f9e215144c924d429650bf65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:36:14.522185   43976 cache.go:107] acquiring lock: {Name:mkda29b5bb252175f29b0ff02804aef107dcec03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:36:14.522199   43976 cache.go:107] acquiring lock: {Name:mk8ea84ec10f13b17272f987c9845159d241e0a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:36:14.522253   43976 cache.go:107] acquiring lock: {Name:mkc54e9b1163409c26006035decd56e7d33bdc00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:36:14.522291   43976 cache.go:107] acquiring lock: {Name:mk66bb170155e67d2734b845ddbdf04b35635288 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:36:14.522292   43976 cache.go:107] acquiring lock: {Name:mk9999a07ac4ece564d90e2f6a96734a6581f76d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:36:14.522315   43976 cache.go:115] /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1128 00:36:14.522322   43976 cache.go:115] /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1128 00:36:14.522333   43976 cache.go:115] /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1128 00:36:14.522333   43976 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 80.454µs
	I1128 00:36:14.522335   43976 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 187.39µs
	I1128 00:36:14.522342   43976 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 55.501µs
	I1128 00:36:14.522350   43976 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1128 00:36:14.522352   43976 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1128 00:36:14.522351   43976 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1128 00:36:14.522231   43976 cache.go:115] /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1128 00:36:14.522366   43976 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 235.979µs
	I1128 00:36:14.522373   43976 cache.go:115] /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1128 00:36:14.522366   43976 cache.go:107] acquiring lock: {Name:mkbc51c40fe10449c21d7aca0195aa1173c09168 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:36:14.522370   43976 start.go:365] acquiring machines lock for stopped-upgrade-789586: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 00:36:14.522381   43976 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 200.771µs
	I1128 00:36:14.522390   43976 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1128 00:36:14.522303   43976 cache.go:115] /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1128 00:36:14.522400   43976 cache.go:115] /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1128 00:36:14.522403   43976 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 248.664µs
	I1128 00:36:14.522406   43976 start.go:369] acquired machines lock for "stopped-upgrade-789586" in 19.646µs
	I1128 00:36:14.522411   43976 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1128 00:36:14.522410   43976 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 45.28µs
	I1128 00:36:14.522420   43976 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:36:14.522426   43976 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1128 00:36:14.522431   43976 fix.go:54] fixHost starting: minikube
	I1128 00:36:14.522353   43976 cache.go:115] /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1128 00:36:14.522523   43976 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 236.31µs
	I1128 00:36:14.522530   43976 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1128 00:36:14.522375   43976 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1128 00:36:14.522540   43976 cache.go:87] Successfully saved all images to host disk.
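	The 404 above is why this run falls back to caching images one by one under .minikube/cache/images instead of extracting a preload tarball: no preload is published for the v1.17.0/cri-o combination. A minimal sketch of that existence check, assuming a plain HTTP HEAD is enough to decide (the real preload code also verifies checksums):

	package main

	import (
		"fmt"
		"net/http"
	)

	const preloadURL = "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4"

	// preloadExists reports whether the preload tarball is published for this
	// kubernetes version/runtime combination. A 404 means: cache images one by one.
	func preloadExists(url string) (bool, error) {
		resp, err := http.Head(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	}

	func main() {
		ok, err := preloadExists(preloadURL)
		if err != nil {
			fmt.Println("could not check preload:", err)
			return
		}
		if !ok {
			fmt.Println("no preload tarball; falling back to per-image cache")
			return
		}
		fmt.Println("preload tarball available")
	}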
	I1128 00:36:14.522778   43976 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:36:14.522818   43976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:36:14.537776   43976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37101
	I1128 00:36:14.538245   43976 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:36:14.538805   43976 main.go:141] libmachine: Using API Version  1
	I1128 00:36:14.538835   43976 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:36:14.539174   43976 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:36:14.539350   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .DriverName
	I1128 00:36:14.539518   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetState
	I1128 00:36:14.541285   43976 fix.go:102] recreateIfNeeded on stopped-upgrade-789586: state=Stopped err=<nil>
	I1128 00:36:14.541312   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .DriverName
	W1128 00:36:14.541493   43976 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:36:14.543822   43976 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-789586" ...
	I1128 00:36:14.545339   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .Start
	I1128 00:36:14.545515   43976 main.go:141] libmachine: (stopped-upgrade-789586) Ensuring networks are active...
	I1128 00:36:14.546309   43976 main.go:141] libmachine: (stopped-upgrade-789586) Ensuring network default is active
	I1128 00:36:14.546644   43976 main.go:141] libmachine: (stopped-upgrade-789586) Ensuring network minikube-net is active
	I1128 00:36:14.547071   43976 main.go:141] libmachine: (stopped-upgrade-789586) Getting domain xml...
	I1128 00:36:14.547850   43976 main.go:141] libmachine: (stopped-upgrade-789586) Creating domain...
	I1128 00:36:15.830123   43976 main.go:141] libmachine: (stopped-upgrade-789586) Waiting to get IP...
	I1128 00:36:15.831016   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:15.831479   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | unable to find current IP address of domain stopped-upgrade-789586 in network minikube-net
	I1128 00:36:15.831582   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | I1128 00:36:15.831464   44010 retry.go:31] will retry after 285.458367ms: waiting for machine to come up
	I1128 00:36:16.119011   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:16.119658   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | unable to find current IP address of domain stopped-upgrade-789586 in network minikube-net
	I1128 00:36:16.119688   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | I1128 00:36:16.119633   44010 retry.go:31] will retry after 359.425524ms: waiting for machine to come up
	I1128 00:36:16.480167   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:16.480863   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | unable to find current IP address of domain stopped-upgrade-789586 in network minikube-net
	I1128 00:36:16.480903   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | I1128 00:36:16.480818   44010 retry.go:31] will retry after 478.681871ms: waiting for machine to come up
	I1128 00:36:16.961484   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:16.961974   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | unable to find current IP address of domain stopped-upgrade-789586 in network minikube-net
	I1128 00:36:16.962006   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | I1128 00:36:16.961906   44010 retry.go:31] will retry after 463.064424ms: waiting for machine to come up
	I1128 00:36:17.426558   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:17.427058   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | unable to find current IP address of domain stopped-upgrade-789586 in network minikube-net
	I1128 00:36:17.427090   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | I1128 00:36:17.426999   44010 retry.go:31] will retry after 708.335902ms: waiting for machine to come up
	I1128 00:36:18.136849   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:18.137336   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | unable to find current IP address of domain stopped-upgrade-789586 in network minikube-net
	I1128 00:36:18.137380   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | I1128 00:36:18.137291   44010 retry.go:31] will retry after 943.558846ms: waiting for machine to come up
	I1128 00:36:19.082968   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:19.083506   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | unable to find current IP address of domain stopped-upgrade-789586 in network minikube-net
	I1128 00:36:19.083531   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | I1128 00:36:19.083469   44010 retry.go:31] will retry after 725.193253ms: waiting for machine to come up
	I1128 00:36:19.810902   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:19.811511   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | unable to find current IP address of domain stopped-upgrade-789586 in network minikube-net
	I1128 00:36:19.811546   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | I1128 00:36:19.811483   44010 retry.go:31] will retry after 1.131084628s: waiting for machine to come up
	I1128 00:36:20.944069   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:20.944610   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | unable to find current IP address of domain stopped-upgrade-789586 in network minikube-net
	I1128 00:36:20.944637   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | I1128 00:36:20.944558   44010 retry.go:31] will retry after 1.473350012s: waiting for machine to come up
	I1128 00:36:22.419160   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:22.419675   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | unable to find current IP address of domain stopped-upgrade-789586 in network minikube-net
	I1128 00:36:22.419697   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | I1128 00:36:22.419651   44010 retry.go:31] will retry after 1.558069837s: waiting for machine to come up
	I1128 00:36:23.980368   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:23.980929   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | unable to find current IP address of domain stopped-upgrade-789586 in network minikube-net
	I1128 00:36:23.980961   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | I1128 00:36:23.980880   44010 retry.go:31] will retry after 2.09239487s: waiting for machine to come up
	I1128 00:36:26.075570   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:26.076082   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | unable to find current IP address of domain stopped-upgrade-789586 in network minikube-net
	I1128 00:36:26.076106   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | I1128 00:36:26.076039   44010 retry.go:31] will retry after 2.593886584s: waiting for machine to come up
	I1128 00:36:28.671372   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:28.671994   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | unable to find current IP address of domain stopped-upgrade-789586 in network minikube-net
	I1128 00:36:28.672026   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | I1128 00:36:28.671933   44010 retry.go:31] will retry after 4.458712767s: waiting for machine to come up
	I1128 00:36:33.133805   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:33.134391   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | unable to find current IP address of domain stopped-upgrade-789586 in network minikube-net
	I1128 00:36:33.134425   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | I1128 00:36:33.134334   44010 retry.go:31] will retry after 5.472663162s: waiting for machine to come up
	I1128 00:36:38.609860   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:38.610358   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | unable to find current IP address of domain stopped-upgrade-789586 in network minikube-net
	I1128 00:36:38.610388   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | I1128 00:36:38.610314   44010 retry.go:31] will retry after 6.114457718s: waiting for machine to come up
	I1128 00:36:44.726394   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:44.726868   43976 main.go:141] libmachine: (stopped-upgrade-789586) Found IP for machine: 192.168.72.157
	I1128 00:36:44.726894   43976 main.go:141] libmachine: (stopped-upgrade-789586) Reserving static IP address...
	I1128 00:36:44.726912   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has current primary IP address 192.168.72.157 and MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:44.727310   43976 main.go:141] libmachine: (stopped-upgrade-789586) Reserved static IP address: 192.168.72.157
	I1128 00:36:44.727352   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | found host DHCP lease matching {name: "stopped-upgrade-789586", mac: "52:54:00:d2:1a:6c", ip: "192.168.72.157"} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-11-28 01:36:39 +0000 UTC Type:0 Mac:52:54:00:d2:1a:6c Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:stopped-upgrade-789586 Clientid:01:52:54:00:d2:1a:6c}
	I1128 00:36:44.727369   43976 main.go:141] libmachine: (stopped-upgrade-789586) Waiting for SSH to be available...
	I1128 00:36:44.727392   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-789586", mac: "52:54:00:d2:1a:6c", ip: "192.168.72.157"}
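	The "will retry after ..." lines above are a bounded retry loop whose delay grows between attempts until the DHCP lease for the VM's MAC address appears. A hedged sketch of that loop shape, with a stand-in lookup function instead of the libvirt lease query:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup until it returns an address or the deadline passes,
	// sleeping a little longer (plus jitter) between attempts, like the
	// "will retry after ..." lines in the log above.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			ip, err := lookup()
			if err == nil && ip != "" {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("attempt %d: no IP yet, will retry after %s\n", attempt, wait)
			time.Sleep(wait)
			delay += delay / 2 // grow the base delay on each miss
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		calls := 0
		lookup := func() (string, error) {
			calls++
			if calls < 4 {
				return "", errors.New("no lease yet")
			}
			return "192.168.72.157", nil
		}
		ip, err := waitForIP(lookup, 30*time.Second)
		fmt.Println(ip, err)
	}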
	I1128 00:36:44.727406   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | Getting to WaitForSSH function...
	I1128 00:36:44.729222   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:44.729500   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1a:6c", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-11-28 01:36:39 +0000 UTC Type:0 Mac:52:54:00:d2:1a:6c Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:stopped-upgrade-789586 Clientid:01:52:54:00:d2:1a:6c}
	I1128 00:36:44.729526   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined IP address 192.168.72.157 and MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:44.729627   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | Using SSH client type: external
	I1128 00:36:44.729661   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/stopped-upgrade-789586/id_rsa (-rw-------)
	I1128 00:36:44.729691   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.157 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/stopped-upgrade-789586/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:36:44.729707   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | About to run SSH command:
	I1128 00:36:44.729737   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | exit 0
	I1128 00:36:44.860270   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | SSH cmd err, output: <nil>: 
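	The probe above is literally an external `ssh ... docker@<ip> exit 0` with host-key checking disabled, using the profile's id_rsa. A sketch that rebuilds that argument list from the log and runs it (only sensible against a throwaway VM):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshReady returns nil once `exit 0` succeeds over SSH, i.e. the guest's sshd
	// is up and the key is accepted. The options mirror the log line above.
	func sshReady(ip, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + ip,
			"exit 0",
		}
		out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("ssh not ready: %v (%s)", err, out)
		}
		return nil
	}

	func main() {
		err := sshReady("192.168.72.157",
			"/home/jenkins/minikube-integration/17206-4749/.minikube/machines/stopped-upgrade-789586/id_rsa")
		fmt.Println("ssh ready:", err == nil, err)
	}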
	I1128 00:36:44.860642   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetConfigRaw
	I1128 00:36:44.861261   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetIP
	I1128 00:36:44.863621   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:44.863959   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1a:6c", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-11-28 01:36:39 +0000 UTC Type:0 Mac:52:54:00:d2:1a:6c Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:stopped-upgrade-789586 Clientid:01:52:54:00:d2:1a:6c}
	I1128 00:36:44.863991   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined IP address 192.168.72.157 and MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:44.864215   43976 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/stopped-upgrade-789586/config.json ...
	I1128 00:36:44.864422   43976 machine.go:88] provisioning docker machine ...
	I1128 00:36:44.864446   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .DriverName
	I1128 00:36:44.864619   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetMachineName
	I1128 00:36:44.864799   43976 buildroot.go:166] provisioning hostname "stopped-upgrade-789586"
	I1128 00:36:44.864816   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetMachineName
	I1128 00:36:44.864970   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHHostname
	I1128 00:36:44.867232   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:44.867612   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1a:6c", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-11-28 01:36:39 +0000 UTC Type:0 Mac:52:54:00:d2:1a:6c Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:stopped-upgrade-789586 Clientid:01:52:54:00:d2:1a:6c}
	I1128 00:36:44.867644   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined IP address 192.168.72.157 and MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:44.867746   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHPort
	I1128 00:36:44.867971   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHKeyPath
	I1128 00:36:44.868121   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHKeyPath
	I1128 00:36:44.868238   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHUsername
	I1128 00:36:44.868381   43976 main.go:141] libmachine: Using SSH client type: native
	I1128 00:36:44.868727   43976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I1128 00:36:44.868742   43976 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-789586 && echo "stopped-upgrade-789586" | sudo tee /etc/hostname
	I1128 00:36:44.991784   43976 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-789586
	
	I1128 00:36:44.991814   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHHostname
	I1128 00:36:44.994355   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:44.994712   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1a:6c", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-11-28 01:36:39 +0000 UTC Type:0 Mac:52:54:00:d2:1a:6c Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:stopped-upgrade-789586 Clientid:01:52:54:00:d2:1a:6c}
	I1128 00:36:44.994743   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined IP address 192.168.72.157 and MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:44.994976   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHPort
	I1128 00:36:44.995202   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHKeyPath
	I1128 00:36:44.995393   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHKeyPath
	I1128 00:36:44.995567   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHUsername
	I1128 00:36:44.995739   43976 main.go:141] libmachine: Using SSH client type: native
	I1128 00:36:44.996063   43976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I1128 00:36:44.996095   43976 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-789586' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-789586/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-789586' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:36:45.117251   43976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
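	The shell block above is generated from the machine name: it ensures the hostname resolves locally, rewriting an existing 127.0.1.1 entry if one is present. A small sketch that only formats that snippet for a given name (string building, not the actual buildroot provisioner):

	package main

	import "fmt"

	// hostsFixup returns the shell run over SSH above: make sure the machine's
	// hostname resolves locally, rewriting an existing 127.0.1.1 entry if present.
	func hostsFixup(name string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
	}

	func main() {
		fmt.Println(hostsFixup("stopped-upgrade-789586"))
	}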
	I1128 00:36:45.117278   43976 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:36:45.117330   43976 buildroot.go:174] setting up certificates
	I1128 00:36:45.117339   43976 provision.go:83] configureAuth start
	I1128 00:36:45.117358   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetMachineName
	I1128 00:36:45.117647   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetIP
	I1128 00:36:45.120456   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:45.120894   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1a:6c", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-11-28 01:36:39 +0000 UTC Type:0 Mac:52:54:00:d2:1a:6c Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:stopped-upgrade-789586 Clientid:01:52:54:00:d2:1a:6c}
	I1128 00:36:45.120924   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined IP address 192.168.72.157 and MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:45.121134   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHHostname
	I1128 00:36:45.123417   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:45.123757   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1a:6c", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-11-28 01:36:39 +0000 UTC Type:0 Mac:52:54:00:d2:1a:6c Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:stopped-upgrade-789586 Clientid:01:52:54:00:d2:1a:6c}
	I1128 00:36:45.123784   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined IP address 192.168.72.157 and MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:45.123890   43976 provision.go:138] copyHostCerts
	I1128 00:36:45.123952   43976 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:36:45.123963   43976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:36:45.124032   43976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:36:45.124133   43976 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:36:45.124143   43976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:36:45.124180   43976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:36:45.124250   43976 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:36:45.124261   43976 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:36:45.124292   43976 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
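	The exec_runner lines above implement copyHostCerts: any stale copy at the destination is removed and then rewritten from the certs directory. A hedged remove-then-copy sketch in plain os/io (the byte counts in the log are simply what io.Copy would report):

	package main

	import (
		"fmt"
		"io"
		"os"
		"path/filepath"
	)

	// copyHostCert mirrors the exec_runner lines above: drop any stale copy at
	// dst, then rewrite it from src with owner-only permissions.
	func copyHostCert(src, dst string) (int64, error) {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil { // "found ..., removing ..."
				return 0, err
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return 0, err
		}
		defer in.Close()
		out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o600)
		if err != nil {
			return 0, err
		}
		defer out.Close()
		return io.Copy(out, in) // "cp: ... --> ... (1078 bytes)"
	}

	func main() {
		home := "/home/jenkins/minikube-integration/17206-4749/.minikube"
		n, err := copyHostCert(filepath.Join(home, "certs", "ca.pem"), filepath.Join(home, "ca.pem"))
		fmt.Println(n, err)
	}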
	I1128 00:36:45.124353   43976 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-789586 san=[192.168.72.157 192.168.72.157 localhost 127.0.0.1 minikube stopped-upgrade-789586]
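	provision.go then issues a server certificate whose SANs cover the VM IP, loopback, and both machine names. A compact sketch of producing a certificate with those SANs via crypto/x509; it is self-signed here for brevity, whereas minikube signs against its ca.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-789586"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the provision.go line: the VM IP plus host names.
			IPAddresses: []net.IP{net.ParseIP("192.168.72.157"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "stopped-upgrade-789586"},
		}
		// Self-signed for brevity; minikube signs with ca.pem/ca-key.pem instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}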
	I1128 00:36:45.329017   43976 provision.go:172] copyRemoteCerts
	I1128 00:36:45.329091   43976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:36:45.329121   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHHostname
	I1128 00:36:45.331535   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:45.331832   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1a:6c", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-11-28 01:36:39 +0000 UTC Type:0 Mac:52:54:00:d2:1a:6c Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:stopped-upgrade-789586 Clientid:01:52:54:00:d2:1a:6c}
	I1128 00:36:45.331860   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined IP address 192.168.72.157 and MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:45.332025   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHPort
	I1128 00:36:45.332235   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHKeyPath
	I1128 00:36:45.332405   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHUsername
	I1128 00:36:45.332661   43976 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/stopped-upgrade-789586/id_rsa Username:docker}
	I1128 00:36:45.415052   43976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:36:45.428628   43976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1128 00:36:45.441978   43976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 00:36:45.454614   43976 provision.go:86] duration metric: configureAuth took 337.265102ms
	I1128 00:36:45.454640   43976 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:36:45.454807   43976 config.go:182] Loaded profile config "stopped-upgrade-789586": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1128 00:36:45.454899   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHHostname
	I1128 00:36:45.457640   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:45.457966   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1a:6c", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-11-28 01:36:39 +0000 UTC Type:0 Mac:52:54:00:d2:1a:6c Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:stopped-upgrade-789586 Clientid:01:52:54:00:d2:1a:6c}
	I1128 00:36:45.458007   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined IP address 192.168.72.157 and MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:45.458181   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHPort
	I1128 00:36:45.458390   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHKeyPath
	I1128 00:36:45.458585   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHKeyPath
	I1128 00:36:45.458736   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHUsername
	I1128 00:36:45.458894   43976 main.go:141] libmachine: Using SSH client type: native
	I1128 00:36:45.459349   43976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I1128 00:36:45.459370   43976 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:36:53.589056   43976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:36:53.589083   43976 machine.go:91] provisioned docker machine in 8.724645586s
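	Most of that 8.7s is the final provisioning command: writing CRIO_MINIKUBE_OPTIONS with the service CIDR as an insecure registry into /etc/sysconfig/crio.minikube and restarting crio. A sketch that only formats that remote command (run it over whatever SSH runner you already have):

	package main

	import "fmt"

	// crioMinikubeOptions builds the remote command seen in the log: drop the
	// CRI-O override file with the insecure-registry flag, then restart crio.
	func crioMinikubeOptions(serviceCIDR string) string {
		content := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
		return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, content)
	}

	func main() {
		fmt.Println(crioMinikubeOptions("10.96.0.0/12"))
	}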
	I1128 00:36:53.589094   43976 start.go:300] post-start starting for "stopped-upgrade-789586" (driver="kvm2")
	I1128 00:36:53.589110   43976 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:36:53.589130   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .DriverName
	I1128 00:36:53.589448   43976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:36:53.589480   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHHostname
	I1128 00:36:53.592231   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:53.592652   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1a:6c", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-11-28 01:36:39 +0000 UTC Type:0 Mac:52:54:00:d2:1a:6c Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:stopped-upgrade-789586 Clientid:01:52:54:00:d2:1a:6c}
	I1128 00:36:53.592686   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined IP address 192.168.72.157 and MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:53.592799   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHPort
	I1128 00:36:53.592979   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHKeyPath
	I1128 00:36:53.593137   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHUsername
	I1128 00:36:53.593274   43976 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/stopped-upgrade-789586/id_rsa Username:docker}
	I1128 00:36:53.675452   43976 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:36:53.679878   43976 info.go:137] Remote host: Buildroot 2019.02.7
	I1128 00:36:53.679900   43976 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:36:53.679960   43976 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:36:53.680039   43976 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:36:53.680124   43976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:36:53.685838   43976 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:36:53.699606   43976 start.go:303] post-start completed in 110.499627ms
	I1128 00:36:53.699627   43976 fix.go:56] fixHost completed within 39.177198742s
	I1128 00:36:53.699645   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHHostname
	I1128 00:36:53.702247   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:53.702591   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1a:6c", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-11-28 01:36:39 +0000 UTC Type:0 Mac:52:54:00:d2:1a:6c Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:stopped-upgrade-789586 Clientid:01:52:54:00:d2:1a:6c}
	I1128 00:36:53.702619   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined IP address 192.168.72.157 and MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:53.702782   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHPort
	I1128 00:36:53.702983   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHKeyPath
	I1128 00:36:53.703111   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHKeyPath
	I1128 00:36:53.703199   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHUsername
	I1128 00:36:53.703331   43976 main.go:141] libmachine: Using SSH client type: native
	I1128 00:36:53.703633   43976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I1128 00:36:53.703644   43976 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1128 00:36:53.821149   43976 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701131813.767281841
	
	I1128 00:36:53.821172   43976 fix.go:206] guest clock: 1701131813.767281841
	I1128 00:36:53.821179   43976 fix.go:219] Guest: 2023-11-28 00:36:53.767281841 +0000 UTC Remote: 2023-11-28 00:36:53.699630582 +0000 UTC m=+39.808476662 (delta=67.651259ms)
	I1128 00:36:53.821195   43976 fix.go:190] guest clock delta is within tolerance: 67.651259ms
	I1128 00:36:53.821199   43976 start.go:83] releasing machines lock for "stopped-upgrade-789586", held for 39.298786456s
	I1128 00:36:53.821217   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .DriverName
	I1128 00:36:53.821503   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetIP
	I1128 00:36:53.824239   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:53.824693   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1a:6c", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-11-28 01:36:39 +0000 UTC Type:0 Mac:52:54:00:d2:1a:6c Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:stopped-upgrade-789586 Clientid:01:52:54:00:d2:1a:6c}
	I1128 00:36:53.824713   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined IP address 192.168.72.157 and MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:53.824923   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .DriverName
	I1128 00:36:53.825418   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .DriverName
	I1128 00:36:53.825611   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .DriverName
	I1128 00:36:53.825683   43976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:36:53.825722   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHHostname
	I1128 00:36:53.825852   43976 ssh_runner.go:195] Run: cat /version.json
	I1128 00:36:53.825885   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHHostname
	I1128 00:36:53.828473   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:53.828627   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:53.828841   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1a:6c", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-11-28 01:36:39 +0000 UTC Type:0 Mac:52:54:00:d2:1a:6c Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:stopped-upgrade-789586 Clientid:01:52:54:00:d2:1a:6c}
	I1128 00:36:53.828866   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined IP address 192.168.72.157 and MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:53.828931   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1a:6c", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2023-11-28 01:36:39 +0000 UTC Type:0 Mac:52:54:00:d2:1a:6c Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:stopped-upgrade-789586 Clientid:01:52:54:00:d2:1a:6c}
	I1128 00:36:53.828966   43976 main.go:141] libmachine: (stopped-upgrade-789586) DBG | domain stopped-upgrade-789586 has defined IP address 192.168.72.157 and MAC address 52:54:00:d2:1a:6c in network minikube-net
	I1128 00:36:53.829005   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHPort
	I1128 00:36:53.829162   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHKeyPath
	I1128 00:36:53.829205   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHPort
	I1128 00:36:53.829375   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHUsername
	I1128 00:36:53.829382   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHKeyPath
	I1128 00:36:53.829542   43976 main.go:141] libmachine: (stopped-upgrade-789586) Calling .GetSSHUsername
	I1128 00:36:53.829543   43976 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/stopped-upgrade-789586/id_rsa Username:docker}
	I1128 00:36:53.829684   43976 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/stopped-upgrade-789586/id_rsa Username:docker}
	W1128 00:36:53.931188   43976 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1128 00:36:53.931253   43976 ssh_runner.go:195] Run: systemctl --version
	I1128 00:36:53.935723   43976 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:36:53.991802   43976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:36:53.997744   43976 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:36:53.997822   43976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:36:54.003278   43976 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1128 00:36:54.003302   43976 start.go:472] detecting cgroup driver to use...
	I1128 00:36:54.003363   43976 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:36:54.013181   43976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:36:54.021241   43976 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:36:54.021293   43976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:36:54.028540   43976 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:36:54.036596   43976 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1128 00:36:54.043832   43976 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1128 00:36:54.043922   43976 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:36:54.139091   43976 docker.go:219] disabling docker service ...
	I1128 00:36:54.139167   43976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:36:54.151681   43976 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:36:54.159617   43976 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:36:54.251758   43976 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:36:54.345267   43976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:36:54.353612   43976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:36:54.364400   43976 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1128 00:36:54.364468   43976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:36:54.372617   43976 out.go:177] 
	W1128 00:36:54.374026   43976 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1128 00:36:54.374051   43976 out.go:239] * 
	* 
	W1128 00:36:54.374930   43976 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 00:36:54.376325   43976 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-789586 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (262.89s)
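The upgrade run above dies at RUNTIME_ENABLE because the pause_image rewrite is pointed at /etc/crio/crio.conf.d/02-crio.conf, a drop-in file the v1.6.2 guest image does not ship. Below is a minimal Go sketch of a more defensive variant that probes for a config file before running the sed shown in the log; the /etc/crio/crio.conf fallback path and the runner signature are assumptions for illustration, not minikube's actual implementation.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// updatePauseImage rewrites pause_image in whichever CRI-O config file exists.
// Only the drop-in path appears in the failing log; the /etc/crio/crio.conf
// fallback is an assumption about older guest images.
func updatePauseImage(run func(args ...string) error, pauseImage string) error {
	candidates := []string{
		"/etc/crio/crio.conf.d/02-crio.conf",
		"/etc/crio/crio.conf",
	}
	for _, conf := range candidates {
		if err := run("sudo", "test", "-f", conf); err != nil {
			continue // config file not present on this image
		}
		sed := fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf)
		return run("sh", "-c", sed)
	}
	return fmt.Errorf("no CRI-O config found to update pause_image")
}

func main() {
	// Local runner for illustration; in the test these commands are executed
	// on the guest VM over SSH rather than on the local shell.
	run := func(args ...string) error {
		return exec.Command(args[0], args[1:]...).Run()
	}
	if err := updatePauseImage(run, "registry.k8s.io/pause:3.1"); err != nil {
		log.Fatal(err)
	}
}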

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (139.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-732472 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-732472 --alsologtostderr -v=3: exit status 82 (2m1.438146232s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-732472"  ...
	* Stopping node "old-k8s-version-732472"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 00:35:45.073755   43763 out.go:296] Setting OutFile to fd 1 ...
	I1128 00:35:45.073908   43763 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:35:45.073921   43763 out.go:309] Setting ErrFile to fd 2...
	I1128 00:35:45.073929   43763 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:35:45.074106   43763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1128 00:35:45.074372   43763 out.go:303] Setting JSON to false
	I1128 00:35:45.074479   43763 mustload.go:65] Loading cluster: old-k8s-version-732472
	I1128 00:35:45.074841   43763 config.go:182] Loaded profile config "old-k8s-version-732472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 00:35:45.074925   43763 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/config.json ...
	I1128 00:35:45.075100   43763 mustload.go:65] Loading cluster: old-k8s-version-732472
	I1128 00:35:45.075228   43763 config.go:182] Loaded profile config "old-k8s-version-732472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 00:35:45.075270   43763 stop.go:39] StopHost: old-k8s-version-732472
	I1128 00:35:45.075749   43763 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:35:45.075799   43763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:35:45.089925   43763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36233
	I1128 00:35:45.090395   43763 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:35:45.090946   43763 main.go:141] libmachine: Using API Version  1
	I1128 00:35:45.090969   43763 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:35:45.091355   43763 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:35:45.093662   43763 out.go:177] * Stopping node "old-k8s-version-732472"  ...
	I1128 00:35:45.095105   43763 main.go:141] libmachine: Stopping "old-k8s-version-732472"...
	I1128 00:35:45.095142   43763 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:35:45.096741   43763 main.go:141] libmachine: (old-k8s-version-732472) Calling .Stop
	I1128 00:35:45.100276   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 0/60
	I1128 00:35:46.101950   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 1/60
	I1128 00:35:47.103637   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 2/60
	I1128 00:35:48.104928   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 3/60
	I1128 00:35:49.107402   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 4/60
	I1128 00:35:50.109942   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 5/60
	I1128 00:35:51.111619   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 6/60
	I1128 00:35:52.113207   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 7/60
	I1128 00:35:53.114779   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 8/60
	I1128 00:35:54.116152   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 9/60
	I1128 00:35:55.117600   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 10/60
	I1128 00:35:56.120011   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 11/60
	I1128 00:35:57.121578   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 12/60
	I1128 00:35:58.123399   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 13/60
	I1128 00:35:59.125902   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 14/60
	I1128 00:36:00.127978   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 15/60
	I1128 00:36:01.129559   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 16/60
	I1128 00:36:02.131284   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 17/60
	I1128 00:36:03.133327   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 18/60
	I1128 00:36:04.134880   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 19/60
	I1128 00:36:05.137278   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 20/60
	I1128 00:36:06.139626   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 21/60
	I1128 00:36:07.141098   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 22/60
	I1128 00:36:08.143539   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 23/60
	I1128 00:36:09.145137   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 24/60
	I1128 00:36:10.146931   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 25/60
	I1128 00:36:11.148708   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 26/60
	I1128 00:36:12.150313   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 27/60
	I1128 00:36:13.152153   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 28/60
	I1128 00:36:14.153899   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 29/60
	I1128 00:36:15.156336   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 30/60
	I1128 00:36:16.158577   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 31/60
	I1128 00:36:17.160164   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 32/60
	I1128 00:36:18.162104   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 33/60
	I1128 00:36:19.163558   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 34/60
	I1128 00:36:20.166073   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 35/60
	I1128 00:36:21.167523   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 36/60
	I1128 00:36:22.169661   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 37/60
	I1128 00:36:23.171482   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 38/60
	I1128 00:36:24.172939   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 39/60
	I1128 00:36:25.175077   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 40/60
	I1128 00:36:26.177278   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 41/60
	I1128 00:36:27.179435   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 42/60
	I1128 00:36:28.180811   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 43/60
	I1128 00:36:29.182379   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 44/60
	I1128 00:36:30.184433   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 45/60
	I1128 00:36:31.185946   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 46/60
	I1128 00:36:32.187247   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 47/60
	I1128 00:36:33.188775   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 48/60
	I1128 00:36:34.190935   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 49/60
	I1128 00:36:35.193232   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 50/60
	I1128 00:36:36.195354   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 51/60
	I1128 00:36:37.197034   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 52/60
	I1128 00:36:38.199430   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 53/60
	I1128 00:36:39.201209   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 54/60
	I1128 00:36:40.202511   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 55/60
	I1128 00:36:41.204228   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 56/60
	I1128 00:36:42.205524   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 57/60
	I1128 00:36:43.206649   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 58/60
	I1128 00:36:44.207920   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 59/60
	I1128 00:36:45.208614   43763 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1128 00:36:45.208655   43763 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 00:36:45.208674   43763 retry.go:31] will retry after 1.123670982s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 00:36:46.332812   43763 stop.go:39] StopHost: old-k8s-version-732472
	I1128 00:36:46.333136   43763 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:36:46.333181   43763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:36:46.347211   43763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44849
	I1128 00:36:46.347663   43763 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:36:46.348058   43763 main.go:141] libmachine: Using API Version  1
	I1128 00:36:46.348079   43763 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:36:46.348471   43763 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:36:46.350579   43763 out.go:177] * Stopping node "old-k8s-version-732472"  ...
	I1128 00:36:46.352012   43763 main.go:141] libmachine: Stopping "old-k8s-version-732472"...
	I1128 00:36:46.352024   43763 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:36:46.353474   43763 main.go:141] libmachine: (old-k8s-version-732472) Calling .Stop
	I1128 00:36:46.356682   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 0/60
	I1128 00:36:47.357756   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 1/60
	I1128 00:36:48.358928   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 2/60
	I1128 00:36:49.360162   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 3/60
	I1128 00:36:50.361628   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 4/60
	I1128 00:36:51.363286   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 5/60
	I1128 00:36:52.365128   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 6/60
	I1128 00:36:53.367169   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 7/60
	I1128 00:36:54.368441   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 8/60
	I1128 00:36:55.370431   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 9/60
	I1128 00:36:56.373274   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 10/60
	I1128 00:36:57.374684   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 11/60
	I1128 00:36:58.376674   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 12/60
	I1128 00:36:59.377956   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 13/60
	I1128 00:37:00.379548   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 14/60
	I1128 00:37:01.381173   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 15/60
	I1128 00:37:02.383383   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 16/60
	I1128 00:37:03.384864   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 17/60
	I1128 00:37:04.386319   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 18/60
	I1128 00:37:05.387732   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 19/60
	I1128 00:37:06.389403   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 20/60
	I1128 00:37:07.390973   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 21/60
	I1128 00:37:08.392401   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 22/60
	I1128 00:37:09.394077   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 23/60
	I1128 00:37:10.395356   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 24/60
	I1128 00:37:11.397125   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 25/60
	I1128 00:37:12.398477   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 26/60
	I1128 00:37:13.399682   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 27/60
	I1128 00:37:14.401189   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 28/60
	I1128 00:37:15.403044   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 29/60
	I1128 00:37:16.404651   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 30/60
	I1128 00:37:17.406138   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 31/60
	I1128 00:37:18.407369   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 32/60
	I1128 00:37:19.409649   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 33/60
	I1128 00:37:20.411170   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 34/60
	I1128 00:37:21.412770   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 35/60
	I1128 00:37:22.414099   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 36/60
	I1128 00:37:23.415543   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 37/60
	I1128 00:37:24.416834   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 38/60
	I1128 00:37:25.418365   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 39/60
	I1128 00:37:26.420033   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 40/60
	I1128 00:37:27.421478   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 41/60
	I1128 00:37:28.423112   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 42/60
	I1128 00:37:29.424375   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 43/60
	I1128 00:37:30.426187   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 44/60
	I1128 00:37:31.428509   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 45/60
	I1128 00:37:32.430016   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 46/60
	I1128 00:37:33.431306   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 47/60
	I1128 00:37:34.432844   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 48/60
	I1128 00:37:35.434134   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 49/60
	I1128 00:37:36.435940   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 50/60
	I1128 00:37:37.437135   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 51/60
	I1128 00:37:38.438412   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 52/60
	I1128 00:37:39.439753   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 53/60
	I1128 00:37:40.441641   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 54/60
	I1128 00:37:41.443297   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 55/60
	I1128 00:37:42.444725   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 56/60
	I1128 00:37:43.445903   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 57/60
	I1128 00:37:44.447106   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 58/60
	I1128 00:37:45.448341   43763 main.go:141] libmachine: (old-k8s-version-732472) Waiting for machine to stop 59/60
	I1128 00:37:46.449210   43763 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1128 00:37:46.449253   43763 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 00:37:46.451768   43763 out.go:177] 
	W1128 00:37:46.453314   43763 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1128 00:37:46.453329   43763 out.go:239] * 
	* 
	W1128 00:37:46.455854   43763 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 00:37:46.457265   43763 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-732472 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-732472 -n old-k8s-version-732472
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-732472 -n old-k8s-version-732472: exit status 3 (18.446427015s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:38:04.905072   44956 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.172:22: connect: no route to host
	E1128 00:38:04.905091   44956 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.172:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-732472" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (139.89s)
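This failure, like the embed-certs and no-preload Stop failures that follow, shows the same signature: libmachine asks the kvm2 driver to stop the domain, polls once per second for 60 attempts, retries the whole sequence once after roughly a second, and finally exits 82 with GUEST_STOP_TIMEOUT while the guest still reports "Running". Below is a minimal Go sketch of that poll-and-retry flow as reconstructed from the log; the stopper interface and function names are illustrative and not minikube's actual API.

package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

// stopper abstracts the driver calls visible in the log (Calling .Stop,
// Calling .GetState); the interface itself is illustrative.
type stopper interface {
	Stop() error
	State() (string, error)
}

// stopOnce mirrors one pass from the log: request a stop, then poll the
// domain state once per second for up to 60 attempts.
func stopOnce(s stopper) error {
	if err := s.Stop(); err != nil {
		return err
	}
	for i := 0; i < 60; i++ {
		log.Printf("Waiting for machine to stop %d/60", i)
		state, err := s.State()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

// stopHost retries the whole sequence once (the log shows a ~1s backoff)
// before surfacing the error the test reports as GUEST_STOP_TIMEOUT / exit 82.
func stopHost(s stopper) error {
	err := stopOnce(s)
	if err == nil {
		return nil
	}
	log.Printf("will retry after ~1s: %v", err)
	time.Sleep(time.Second)
	if err := stopOnce(s); err != nil {
		return fmt.Errorf("Unable to stop VM: %w", err)
	}
	return nil
}

func main() {
	_ = stopHost // wiring a concrete kvm2-backed stopper is out of scope for this sketch
}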

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (140.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-304541 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-304541 --alsologtostderr -v=3: exit status 82 (2m1.725922964s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-304541"  ...
	* Stopping node "embed-certs-304541"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 00:36:35.140406   44248 out.go:296] Setting OutFile to fd 1 ...
	I1128 00:36:35.140540   44248 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:36:35.140557   44248 out.go:309] Setting ErrFile to fd 2...
	I1128 00:36:35.140564   44248 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:36:35.140865   44248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1128 00:36:35.141127   44248 out.go:303] Setting JSON to false
	I1128 00:36:35.141200   44248 mustload.go:65] Loading cluster: embed-certs-304541
	I1128 00:36:35.141525   44248 config.go:182] Loaded profile config "embed-certs-304541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:36:35.141596   44248 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/config.json ...
	I1128 00:36:35.141743   44248 mustload.go:65] Loading cluster: embed-certs-304541
	I1128 00:36:35.141842   44248 config.go:182] Loaded profile config "embed-certs-304541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:36:35.141864   44248 stop.go:39] StopHost: embed-certs-304541
	I1128 00:36:35.142317   44248 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:36:35.142381   44248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:36:35.157528   44248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
	I1128 00:36:35.157976   44248 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:36:35.158582   44248 main.go:141] libmachine: Using API Version  1
	I1128 00:36:35.158608   44248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:36:35.158930   44248 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:36:35.161831   44248 out.go:177] * Stopping node "embed-certs-304541"  ...
	I1128 00:36:35.163508   44248 main.go:141] libmachine: Stopping "embed-certs-304541"...
	I1128 00:36:35.163531   44248 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:36:35.165460   44248 main.go:141] libmachine: (embed-certs-304541) Calling .Stop
	I1128 00:36:35.169121   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 0/60
	I1128 00:36:36.172111   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 1/60
	I1128 00:36:37.173885   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 2/60
	I1128 00:36:38.175544   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 3/60
	I1128 00:36:39.177108   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 4/60
	I1128 00:36:40.179169   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 5/60
	I1128 00:36:41.180446   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 6/60
	I1128 00:36:42.181944   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 7/60
	I1128 00:36:43.183582   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 8/60
	I1128 00:36:44.184970   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 9/60
	I1128 00:36:45.186869   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 10/60
	I1128 00:36:46.188210   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 11/60
	I1128 00:36:47.189623   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 12/60
	I1128 00:36:48.191013   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 13/60
	I1128 00:36:49.192713   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 14/60
	I1128 00:36:50.194614   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 15/60
	I1128 00:36:51.195961   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 16/60
	I1128 00:36:52.197259   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 17/60
	I1128 00:36:53.198611   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 18/60
	I1128 00:36:54.200198   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 19/60
	I1128 00:36:55.202060   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 20/60
	I1128 00:36:56.203476   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 21/60
	I1128 00:36:57.205085   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 22/60
	I1128 00:36:58.207264   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 23/60
	I1128 00:36:59.208640   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 24/60
	I1128 00:37:00.210762   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 25/60
	I1128 00:37:01.212375   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 26/60
	I1128 00:37:02.213864   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 27/60
	I1128 00:37:03.215435   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 28/60
	I1128 00:37:04.216799   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 29/60
	I1128 00:37:05.218153   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 30/60
	I1128 00:37:06.220026   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 31/60
	I1128 00:37:07.222314   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 32/60
	I1128 00:37:08.223706   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 33/60
	I1128 00:37:09.225274   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 34/60
	I1128 00:37:10.227386   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 35/60
	I1128 00:37:11.228953   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 36/60
	I1128 00:37:12.230223   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 37/60
	I1128 00:37:13.231621   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 38/60
	I1128 00:37:14.232996   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 39/60
	I1128 00:37:15.235111   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 40/60
	I1128 00:37:16.236580   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 41/60
	I1128 00:37:17.238223   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 42/60
	I1128 00:37:18.239505   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 43/60
	I1128 00:37:19.241118   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 44/60
	I1128 00:37:20.242888   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 45/60
	I1128 00:37:21.244181   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 46/60
	I1128 00:37:22.245699   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 47/60
	I1128 00:37:23.247334   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 48/60
	I1128 00:37:24.249258   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 49/60
	I1128 00:37:25.250972   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 50/60
	I1128 00:37:26.252883   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 51/60
	I1128 00:37:27.254096   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 52/60
	I1128 00:37:28.255661   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 53/60
	I1128 00:37:29.257005   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 54/60
	I1128 00:37:30.258798   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 55/60
	I1128 00:37:31.260337   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 56/60
	I1128 00:37:32.261769   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 57/60
	I1128 00:37:33.263351   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 58/60
	I1128 00:37:34.264955   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 59/60
	I1128 00:37:35.266251   44248 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1128 00:37:35.266304   44248 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 00:37:35.266325   44248 retry.go:31] will retry after 1.419854249s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 00:37:36.686572   44248 stop.go:39] StopHost: embed-certs-304541
	I1128 00:37:36.686942   44248 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:37:36.686995   44248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:37:36.702013   44248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33879
	I1128 00:37:36.702493   44248 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:37:36.703067   44248 main.go:141] libmachine: Using API Version  1
	I1128 00:37:36.703095   44248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:37:36.703406   44248 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:37:36.705267   44248 out.go:177] * Stopping node "embed-certs-304541"  ...
	I1128 00:37:36.706503   44248 main.go:141] libmachine: Stopping "embed-certs-304541"...
	I1128 00:37:36.706522   44248 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:37:36.708113   44248 main.go:141] libmachine: (embed-certs-304541) Calling .Stop
	I1128 00:37:36.711158   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 0/60
	I1128 00:37:37.712676   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 1/60
	I1128 00:37:38.713830   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 2/60
	I1128 00:37:39.715287   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 3/60
	I1128 00:37:40.716652   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 4/60
	I1128 00:37:41.718535   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 5/60
	I1128 00:37:42.719913   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 6/60
	I1128 00:37:43.721211   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 7/60
	I1128 00:37:44.722490   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 8/60
	I1128 00:37:45.724028   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 9/60
	I1128 00:37:46.725571   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 10/60
	I1128 00:37:47.726784   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 11/60
	I1128 00:37:48.728101   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 12/60
	I1128 00:37:49.729451   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 13/60
	I1128 00:37:50.731450   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 14/60
	I1128 00:37:51.733726   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 15/60
	I1128 00:37:52.735046   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 16/60
	I1128 00:37:53.736308   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 17/60
	I1128 00:37:54.737592   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 18/60
	I1128 00:37:55.738789   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 19/60
	I1128 00:37:56.740623   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 20/60
	I1128 00:37:57.742272   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 21/60
	I1128 00:37:58.743823   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 22/60
	I1128 00:37:59.745070   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 23/60
	I1128 00:38:00.746258   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 24/60
	I1128 00:38:01.747774   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 25/60
	I1128 00:38:02.749098   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 26/60
	I1128 00:38:03.750556   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 27/60
	I1128 00:38:04.751737   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 28/60
	I1128 00:38:05.753263   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 29/60
	I1128 00:38:06.755068   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 30/60
	I1128 00:38:07.756180   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 31/60
	I1128 00:38:08.757383   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 32/60
	I1128 00:38:09.758981   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 33/60
	I1128 00:38:10.760263   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 34/60
	I1128 00:38:11.761704   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 35/60
	I1128 00:38:12.762850   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 36/60
	I1128 00:38:13.763993   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 37/60
	I1128 00:38:14.765243   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 38/60
	I1128 00:38:15.767122   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 39/60
	I1128 00:38:16.768742   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 40/60
	I1128 00:38:17.770002   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 41/60
	I1128 00:38:18.771191   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 42/60
	I1128 00:38:19.772427   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 43/60
	I1128 00:38:20.773633   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 44/60
	I1128 00:38:21.775236   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 45/60
	I1128 00:38:22.776576   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 46/60
	I1128 00:38:23.777936   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 47/60
	I1128 00:38:24.779303   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 48/60
	I1128 00:38:25.780570   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 49/60
	I1128 00:38:26.782277   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 50/60
	I1128 00:38:27.783559   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 51/60
	I1128 00:38:28.785086   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 52/60
	I1128 00:38:29.787304   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 53/60
	I1128 00:38:30.788656   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 54/60
	I1128 00:38:31.790653   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 55/60
	I1128 00:38:32.792198   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 56/60
	I1128 00:38:33.793623   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 57/60
	I1128 00:38:34.795649   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 58/60
	I1128 00:38:35.796917   44248 main.go:141] libmachine: (embed-certs-304541) Waiting for machine to stop 59/60
	I1128 00:38:36.797950   44248 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1128 00:38:36.797992   44248 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 00:38:36.799861   44248 out.go:177] 
	W1128 00:38:36.801468   44248 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1128 00:38:36.801489   44248 out.go:239] * 
	* 
	W1128 00:38:36.803720   44248 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 00:38:36.805249   44248 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-304541 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-304541 -n embed-certs-304541
E1128 00:38:50.988525   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-304541 -n embed-certs-304541: exit status 3 (18.530450664s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:38:55.337051   45381 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.93:22: connect: no route to host
	E1128 00:38:55.337072   45381 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.93:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-304541" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (140.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (139.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-473615 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-473615 --alsologtostderr -v=3: exit status 82 (2m1.043756533s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-473615"  ...
	* Stopping node "no-preload-473615"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 00:37:06.839194   44749 out.go:296] Setting OutFile to fd 1 ...
	I1128 00:37:06.839325   44749 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:37:06.839336   44749 out.go:309] Setting ErrFile to fd 2...
	I1128 00:37:06.839343   44749 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:37:06.839557   44749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1128 00:37:06.839812   44749 out.go:303] Setting JSON to false
	I1128 00:37:06.839890   44749 mustload.go:65] Loading cluster: no-preload-473615
	I1128 00:37:06.840272   44749 config.go:182] Loaded profile config "no-preload-473615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 00:37:06.840368   44749 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/config.json ...
	I1128 00:37:06.840548   44749 mustload.go:65] Loading cluster: no-preload-473615
	I1128 00:37:06.840669   44749 config.go:182] Loaded profile config "no-preload-473615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 00:37:06.840694   44749 stop.go:39] StopHost: no-preload-473615
	I1128 00:37:06.841066   44749 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:37:06.841136   44749 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:37:06.855528   44749 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42461
	I1128 00:37:06.855979   44749 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:37:06.856617   44749 main.go:141] libmachine: Using API Version  1
	I1128 00:37:06.856649   44749 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:37:06.857051   44749 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:37:06.859485   44749 out.go:177] * Stopping node "no-preload-473615"  ...
	I1128 00:37:06.861138   44749 main.go:141] libmachine: Stopping "no-preload-473615"...
	I1128 00:37:06.861167   44749 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:37:06.862888   44749 main.go:141] libmachine: (no-preload-473615) Calling .Stop
	I1128 00:37:06.866345   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 0/60
	I1128 00:37:07.867731   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 1/60
	I1128 00:37:08.869274   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 2/60
	I1128 00:37:09.871622   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 3/60
	I1128 00:37:10.872872   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 4/60
	I1128 00:37:11.874782   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 5/60
	I1128 00:37:12.876275   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 6/60
	I1128 00:37:13.877651   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 7/60
	I1128 00:37:14.878891   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 8/60
	I1128 00:37:15.880406   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 9/60
	I1128 00:37:16.882949   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 10/60
	I1128 00:37:17.884362   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 11/60
	I1128 00:37:18.885808   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 12/60
	I1128 00:37:19.887137   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 13/60
	I1128 00:37:20.888413   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 14/60
	I1128 00:37:21.890581   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 15/60
	I1128 00:37:22.892024   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 16/60
	I1128 00:37:23.893755   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 17/60
	I1128 00:37:24.895148   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 18/60
	I1128 00:37:25.896600   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 19/60
	I1128 00:37:26.898746   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 20/60
	I1128 00:37:27.901248   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 21/60
	I1128 00:37:28.902821   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 22/60
	I1128 00:37:29.904138   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 23/60
	I1128 00:37:30.905633   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 24/60
	I1128 00:37:31.906926   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 25/60
	I1128 00:37:32.908210   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 26/60
	I1128 00:37:33.909520   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 27/60
	I1128 00:37:34.911022   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 28/60
	I1128 00:37:35.912393   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 29/60
	I1128 00:37:36.914644   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 30/60
	I1128 00:37:37.916319   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 31/60
	I1128 00:37:38.917569   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 32/60
	I1128 00:37:39.919211   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 33/60
	I1128 00:37:40.920484   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 34/60
	I1128 00:37:41.922595   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 35/60
	I1128 00:37:42.923815   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 36/60
	I1128 00:37:43.925164   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 37/60
	I1128 00:37:44.926299   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 38/60
	I1128 00:37:45.927736   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 39/60
	I1128 00:37:46.929733   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 40/60
	I1128 00:37:47.931025   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 41/60
	I1128 00:37:48.932386   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 42/60
	I1128 00:37:49.933730   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 43/60
	I1128 00:37:50.935069   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 44/60
	I1128 00:37:51.937453   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 45/60
	I1128 00:37:52.939366   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 46/60
	I1128 00:37:53.940516   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 47/60
	I1128 00:37:54.941958   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 48/60
	I1128 00:37:55.943056   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 49/60
	I1128 00:37:56.944963   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 50/60
	I1128 00:37:57.946151   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 51/60
	I1128 00:37:58.948594   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 52/60
	I1128 00:37:59.949804   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 53/60
	I1128 00:38:00.951134   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 54/60
	I1128 00:38:01.952824   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 55/60
	I1128 00:38:02.954057   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 56/60
	I1128 00:38:03.955438   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 57/60
	I1128 00:38:04.956868   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 58/60
	I1128 00:38:05.958226   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 59/60
	I1128 00:38:06.959374   44749 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1128 00:38:06.959424   44749 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 00:38:06.959441   44749 retry.go:31] will retry after 742.141266ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 00:38:07.701875   44749 stop.go:39] StopHost: no-preload-473615
	I1128 00:38:07.702275   44749 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:38:07.702325   44749 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:38:07.717399   44749 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46733
	I1128 00:38:07.717775   44749 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:38:07.718234   44749 main.go:141] libmachine: Using API Version  1
	I1128 00:38:07.718257   44749 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:38:07.718581   44749 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:38:07.720488   44749 out.go:177] * Stopping node "no-preload-473615"  ...
	I1128 00:38:07.721935   44749 main.go:141] libmachine: Stopping "no-preload-473615"...
	I1128 00:38:07.721959   44749 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:38:07.723377   44749 main.go:141] libmachine: (no-preload-473615) Calling .Stop
	I1128 00:38:07.726640   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 0/60
	I1128 00:38:08.728131   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 1/60
	I1128 00:38:09.729544   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 2/60
	I1128 00:38:10.730783   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 3/60
	I1128 00:38:11.732012   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 4/60
	I1128 00:38:12.733198   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 5/60
	I1128 00:38:13.734557   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 6/60
	I1128 00:38:14.735841   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 7/60
	I1128 00:38:15.737127   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 8/60
	I1128 00:38:16.738374   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 9/60
	I1128 00:38:17.739686   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 10/60
	I1128 00:38:18.741105   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 11/60
	I1128 00:38:19.742405   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 12/60
	I1128 00:38:20.743699   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 13/60
	I1128 00:38:21.745010   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 14/60
	I1128 00:38:22.746168   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 15/60
	I1128 00:38:23.747545   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 16/60
	I1128 00:38:24.748820   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 17/60
	I1128 00:38:25.750337   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 18/60
	I1128 00:38:26.751537   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 19/60
	I1128 00:38:27.753199   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 20/60
	I1128 00:38:28.754665   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 21/60
	I1128 00:38:29.756189   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 22/60
	I1128 00:38:30.757684   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 23/60
	I1128 00:38:31.759055   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 24/60
	I1128 00:38:32.760851   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 25/60
	I1128 00:38:33.762343   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 26/60
	I1128 00:38:34.763806   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 27/60
	I1128 00:38:35.765489   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 28/60
	I1128 00:38:36.767094   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 29/60
	I1128 00:38:37.768745   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 30/60
	I1128 00:38:38.770642   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 31/60
	I1128 00:38:39.772263   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 32/60
	I1128 00:38:40.773911   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 33/60
	I1128 00:38:41.775274   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 34/60
	I1128 00:38:42.776583   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 35/60
	I1128 00:38:43.778018   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 36/60
	I1128 00:38:44.779531   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 37/60
	I1128 00:38:45.781268   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 38/60
	I1128 00:38:46.783346   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 39/60
	I1128 00:38:47.785132   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 40/60
	I1128 00:38:48.786526   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 41/60
	I1128 00:38:49.787912   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 42/60
	I1128 00:38:50.789408   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 43/60
	I1128 00:38:51.790845   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 44/60
	I1128 00:38:52.792370   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 45/60
	I1128 00:38:53.793874   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 46/60
	I1128 00:38:54.795302   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 47/60
	I1128 00:38:55.796636   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 48/60
	I1128 00:38:56.798181   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 49/60
	I1128 00:38:57.799799   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 50/60
	I1128 00:38:58.801286   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 51/60
	I1128 00:38:59.802496   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 52/60
	I1128 00:39:00.804094   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 53/60
	I1128 00:39:01.805633   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 54/60
	I1128 00:39:02.807300   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 55/60
	I1128 00:39:03.808850   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 56/60
	I1128 00:39:04.810311   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 57/60
	I1128 00:39:05.811712   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 58/60
	I1128 00:39:06.813242   44749 main.go:141] libmachine: (no-preload-473615) Waiting for machine to stop 59/60
	I1128 00:39:07.814653   44749 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1128 00:39:07.814700   44749 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 00:39:07.816562   44749 out.go:177] 
	W1128 00:39:07.818048   44749 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1128 00:39:07.818063   44749 out.go:239] * 
	* 
	W1128 00:39:07.820437   44749 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 00:39:07.821799   44749 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-473615 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-473615 -n no-preload-473615
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-473615 -n no-preload-473615: exit status 3 (18.489152823s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:39:26.313008   45605 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.195:22: connect: no route to host
	E1128 00:39:26.313035   45605 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.195:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-473615" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.53s)
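From the stderr above, the failure pattern is: the stop path polls the machine state once per second for up to 60 iterations (stop.go), retries the whole cycle once after a sub-second backoff (retry.go, ~742ms here), and then exits with GUEST_STOP_TIMEOUT because the KVM guest never leaves the "Running" state. Below is a minimal Go sketch of that polling-with-retry shape, assuming a hypothetical vmRunning probe in place of the real libmachine GetState call; it illustrates the behaviour recorded in the log, not minikube's actual implementation.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// vmRunning stands in for libmachine's GetState probe; in this failing run
	// the guest never leaves "Running", so the stand-in always reports true.
	func vmRunning(name string) bool { return true }

	// stopOnce mirrors the shape seen in the log: poll up to 60 times, one
	// second apart, and report a temporary error if the machine is still running.
	func stopOnce(name string) error {
		for i := 0; i < 60; i++ {
			fmt.Printf("Waiting for machine to stop %d/60\n", i)
			if !vmRunning(name) {
				return nil
			}
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// The stop command retries the full 60-poll cycle once (retry.go above,
		// with a ~742ms backoff) before exiting with GUEST_STOP_TIMEOUT.
		var err error
		for attempt := 0; attempt < 2; attempt++ {
			if err = stopOnce("no-preload-473615"); err == nil {
				return
			}
			time.Sleep(750 * time.Millisecond)
		}
		fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
	}

In the failing run the probe reports Running for the entire two attempts, which accounts for the ~2m1s wall-clock time recorded for the stop command.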

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-732472 -n old-k8s-version-732472
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-732472 -n old-k8s-version-732472: exit status 3 (3.20112335s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:38:08.105079   45071 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.172:22: connect: no route to host
	E1128 00:38:08.105115   45071 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.172:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-732472 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-732472 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152804594s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.172:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-732472 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-732472 -n old-k8s-version-732472
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-732472 -n old-k8s-version-732472: exit status 3 (3.062621259s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:38:17.321114   45227 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.172:22: connect: no route to host
	E1128 00:38:17.321133   45227 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.172:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-732472" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)
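This failure is downstream of the stop timeout pattern above: the test reads the host field from minikube status, expects the literal "Stopped", and instead gets "Error" because the status probe cannot open an SSH session to the VM ("no route to host"); the follow-up addons enable dashboard then fails with MK_ADDON_ENABLE_PAUSED for the same reason. Below is a minimal Go sketch of the post-stop status check being asserted here, using a hypothetical hostStatus helper rather than the real helpers in start_stop_delete_test.go.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostStatus shells out the same way the test does and returns the raw
	// {{.Host}} field, e.g. "Stopped", "Running", or "Error".
	func hostStatus(profile string) (string, error) {
		out, err := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", profile, "-n", profile).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		// status exits non-zero when the host is unreachable, so the error is
		// tolerated and only the printed field is asserted, as in the test.
		got, _ := hostStatus("old-k8s-version-732472")
		if got != "Stopped" {
			fmt.Printf("expected post-stop host status to be %q but got %q\n", "Stopped", got)
		}
	}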

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-488423 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-488423 --alsologtostderr -v=3: exit status 82 (2m1.287369352s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-488423"  ...
	* Stopping node "default-k8s-diff-port-488423"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 00:38:10.133050   45198 out.go:296] Setting OutFile to fd 1 ...
	I1128 00:38:10.133294   45198 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:38:10.133302   45198 out.go:309] Setting ErrFile to fd 2...
	I1128 00:38:10.133307   45198 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:38:10.133475   45198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1128 00:38:10.133703   45198 out.go:303] Setting JSON to false
	I1128 00:38:10.133776   45198 mustload.go:65] Loading cluster: default-k8s-diff-port-488423
	I1128 00:38:10.134090   45198 config.go:182] Loaded profile config "default-k8s-diff-port-488423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:38:10.134153   45198 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/config.json ...
	I1128 00:38:10.134313   45198 mustload.go:65] Loading cluster: default-k8s-diff-port-488423
	I1128 00:38:10.134416   45198 config.go:182] Loaded profile config "default-k8s-diff-port-488423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:38:10.134447   45198 stop.go:39] StopHost: default-k8s-diff-port-488423
	I1128 00:38:10.134853   45198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:38:10.134896   45198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:38:10.149053   45198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37641
	I1128 00:38:10.149536   45198 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:38:10.150199   45198 main.go:141] libmachine: Using API Version  1
	I1128 00:38:10.150225   45198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:38:10.150589   45198 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:38:10.152855   45198 out.go:177] * Stopping node "default-k8s-diff-port-488423"  ...
	I1128 00:38:10.154229   45198 main.go:141] libmachine: Stopping "default-k8s-diff-port-488423"...
	I1128 00:38:10.154252   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:38:10.155884   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Stop
	I1128 00:38:10.158994   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 0/60
	I1128 00:38:11.160505   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 1/60
	I1128 00:38:12.161810   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 2/60
	I1128 00:38:13.163148   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 3/60
	I1128 00:38:14.164492   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 4/60
	I1128 00:38:15.166241   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 5/60
	I1128 00:38:16.167769   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 6/60
	I1128 00:38:17.169014   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 7/60
	I1128 00:38:18.170380   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 8/60
	I1128 00:38:19.171632   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 9/60
	I1128 00:38:20.173613   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 10/60
	I1128 00:38:21.175121   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 11/60
	I1128 00:38:22.176471   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 12/60
	I1128 00:38:23.177796   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 13/60
	I1128 00:38:24.179063   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 14/60
	I1128 00:38:25.180805   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 15/60
	I1128 00:38:26.182359   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 16/60
	I1128 00:38:27.183630   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 17/60
	I1128 00:38:28.185085   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 18/60
	I1128 00:38:29.186508   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 19/60
	I1128 00:38:30.188465   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 20/60
	I1128 00:38:31.189969   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 21/60
	I1128 00:38:32.191428   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 22/60
	I1128 00:38:33.192845   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 23/60
	I1128 00:38:34.194152   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 24/60
	I1128 00:38:35.196032   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 25/60
	I1128 00:38:36.197466   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 26/60
	I1128 00:38:37.199144   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 27/60
	I1128 00:38:38.201630   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 28/60
	I1128 00:38:39.202894   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 29/60
	I1128 00:38:40.205317   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 30/60
	I1128 00:38:41.206837   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 31/60
	I1128 00:38:42.208324   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 32/60
	I1128 00:38:43.209720   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 33/60
	I1128 00:38:44.210998   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 34/60
	I1128 00:38:45.212742   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 35/60
	I1128 00:38:46.214185   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 36/60
	I1128 00:38:47.215775   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 37/60
	I1128 00:38:48.217382   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 38/60
	I1128 00:38:49.218888   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 39/60
	I1128 00:38:50.221132   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 40/60
	I1128 00:38:51.222652   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 41/60
	I1128 00:38:52.224123   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 42/60
	I1128 00:38:53.225826   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 43/60
	I1128 00:38:54.227289   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 44/60
	I1128 00:38:55.229336   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 45/60
	I1128 00:38:56.230675   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 46/60
	I1128 00:38:57.232150   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 47/60
	I1128 00:38:58.233495   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 48/60
	I1128 00:38:59.234856   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 49/60
	I1128 00:39:00.237119   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 50/60
	I1128 00:39:01.238738   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 51/60
	I1128 00:39:02.240146   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 52/60
	I1128 00:39:03.241739   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 53/60
	I1128 00:39:04.243324   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 54/60
	I1128 00:39:05.245298   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 55/60
	I1128 00:39:06.246850   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 56/60
	I1128 00:39:07.248204   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 57/60
	I1128 00:39:08.249675   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 58/60
	I1128 00:39:09.251185   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 59/60
	I1128 00:39:10.252533   45198 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1128 00:39:10.252601   45198 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 00:39:10.252625   45198 retry.go:31] will retry after 989.524804ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 00:39:11.242715   45198 stop.go:39] StopHost: default-k8s-diff-port-488423
	I1128 00:39:11.243095   45198 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:39:11.243136   45198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:39:11.257503   45198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40733
	I1128 00:39:11.257897   45198 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:39:11.258336   45198 main.go:141] libmachine: Using API Version  1
	I1128 00:39:11.258359   45198 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:39:11.258681   45198 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:39:11.261116   45198 out.go:177] * Stopping node "default-k8s-diff-port-488423"  ...
	I1128 00:39:11.262278   45198 main.go:141] libmachine: Stopping "default-k8s-diff-port-488423"...
	I1128 00:39:11.262291   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:39:11.263749   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Stop
	I1128 00:39:11.266919   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 0/60
	I1128 00:39:12.268286   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 1/60
	I1128 00:39:13.269630   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 2/60
	I1128 00:39:14.271094   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 3/60
	I1128 00:39:15.272402   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 4/60
	I1128 00:39:16.274175   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 5/60
	I1128 00:39:17.275626   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 6/60
	I1128 00:39:18.276948   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 7/60
	I1128 00:39:19.278418   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 8/60
	I1128 00:39:20.279777   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 9/60
	I1128 00:39:21.281845   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 10/60
	I1128 00:39:22.283070   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 11/60
	I1128 00:39:23.284794   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 12/60
	I1128 00:39:24.285989   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 13/60
	I1128 00:39:25.287569   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 14/60
	I1128 00:39:26.289384   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 15/60
	I1128 00:39:27.290564   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 16/60
	I1128 00:39:28.291959   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 17/60
	I1128 00:39:29.293191   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 18/60
	I1128 00:39:30.294538   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 19/60
	I1128 00:39:31.296360   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 20/60
	I1128 00:39:32.297577   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 21/60
	I1128 00:39:33.298864   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 22/60
	I1128 00:39:34.300090   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 23/60
	I1128 00:39:35.301536   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 24/60
	I1128 00:39:36.303358   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 25/60
	I1128 00:39:37.304717   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 26/60
	I1128 00:39:38.306383   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 27/60
	I1128 00:39:39.307856   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 28/60
	I1128 00:39:40.309236   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 29/60
	I1128 00:39:41.311036   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 30/60
	I1128 00:39:42.312479   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 31/60
	I1128 00:39:43.313831   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 32/60
	I1128 00:39:44.315559   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 33/60
	I1128 00:39:45.317079   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 34/60
	I1128 00:39:46.319133   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 35/60
	I1128 00:39:47.320568   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 36/60
	I1128 00:39:48.322035   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 37/60
	I1128 00:39:49.323695   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 38/60
	I1128 00:39:50.325226   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 39/60
	I1128 00:39:51.327615   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 40/60
	I1128 00:39:52.329157   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 41/60
	I1128 00:39:53.330465   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 42/60
	I1128 00:39:54.331968   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 43/60
	I1128 00:39:55.333233   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 44/60
	I1128 00:39:56.335154   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 45/60
	I1128 00:39:57.336543   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 46/60
	I1128 00:39:58.337794   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 47/60
	I1128 00:39:59.339351   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 48/60
	I1128 00:40:00.340573   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 49/60
	I1128 00:40:01.342211   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 50/60
	I1128 00:40:02.343640   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 51/60
	I1128 00:40:03.344884   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 52/60
	I1128 00:40:04.346193   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 53/60
	I1128 00:40:05.347712   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 54/60
	I1128 00:40:06.349507   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 55/60
	I1128 00:40:07.350867   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 56/60
	I1128 00:40:08.352355   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 57/60
	I1128 00:40:09.353590   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 58/60
	I1128 00:40:10.355128   45198 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for machine to stop 59/60
	I1128 00:40:11.356162   45198 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1128 00:40:11.356215   45198 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 00:40:11.358327   45198 out.go:177] 
	W1128 00:40:11.359796   45198 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1128 00:40:11.359811   45198 out.go:239] * 
	* 
	W1128 00:40:11.362204   45198 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 00:40:11.363552   45198 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-488423 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-488423 -n default-k8s-diff-port-488423
E1128 00:40:27.681511   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-488423 -n default-k8s-diff-port-488423: exit status 3 (18.436534136s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:40:29.801169   45938 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.242:22: connect: no route to host
	E1128 00:40:29.801193   45938 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.242:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-488423" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.72s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-304541 -n embed-certs-304541
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-304541 -n embed-certs-304541: exit status 3 (3.167698891s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:38:58.505051   45470 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.93:22: connect: no route to host
	E1128 00:38:58.505093   45470 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.93:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-304541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-304541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152541823s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.93:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-304541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-304541 -n embed-certs-304541
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-304541 -n embed-certs-304541: exit status 3 (3.063272259s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:39:07.721176   45539 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.93:22: connect: no route to host
	E1128 00:39:07.721206   45539 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.93:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-304541" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-473615 -n no-preload-473615
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-473615 -n no-preload-473615: exit status 3 (3.168117153s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:39:29.481111   45687 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.195:22: connect: no route to host
	E1128 00:39:29.481144   45687 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.195:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-473615 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-473615 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152790997s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.195:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-473615 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-473615 -n no-preload-473615
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-473615 -n no-preload-473615: exit status 3 (3.063021788s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:39:38.697086   45757 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.195:22: connect: no route to host
	E1128 00:39:38.697109   45757 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.195:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-473615" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
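This failure reduces to the post-stop assertion at start_stop_delete_test.go:241: after the stop, `out/minikube-linux-amd64 status --format={{.Host}}` is expected to print "Stopped", but it prints "Error" because the status probe itself cannot reach the guest. A minimal sketch of that kind of check, assuming a hypothetical helper name and error wording (this is not the actual start_stop_delete_test.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// checkPostStopStatus runs `minikube status --format={{.Host}}` for a profile
// and verifies that the host reports "Stopped" rather than "Error". The exec
// error is ignored deliberately: as the runs above show, `minikube status`
// exits non-zero (exit status 3) for any non-Running state, so only the
// printed state is inspected.
func checkPostStopStatus(profile string) error {
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	state := strings.TrimSpace(string(out))
	if state != "Stopped" {
		return fmt.Errorf("expected post-stop host status %q, got %q", "Stopped", state)
	}
	return nil
}

func main() {
	if err := checkPostStopStatus("no-preload-473615"); err != nil {
		fmt.Println(err)
	}
}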

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-488423 -n default-k8s-diff-port-488423
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-488423 -n default-k8s-diff-port-488423: exit status 3 (3.167318637s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:40:32.969089   46014 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.242:22: connect: no route to host
	E1128 00:40:32.969116   46014 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.242:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-488423 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-488423 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152274255s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.242:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-488423 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-488423 -n default-k8s-diff-port-488423
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-488423 -n default-k8s-diff-port-488423: exit status 3 (3.063624769s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:40:42.185188   46085 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.242:22: connect: no route to host
	E1128 00:40:42.185212   46085 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.242:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-488423" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
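Both EnableAddonAfterStop failures share the same underlying symptom: every operation that needs the guest (the `crictl list` pause check, the /var capacity probe) fails with `dial tcp <ip>:22: connect: no route to host`, i.e. the VM's SSH port is unreachable after the stop. A small, hypothetical sketch that reproduces just that connectivity probe against the addresses reported above (the helper name is an assumption, not minikube code):

package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable attempts a plain TCP connection to the guest's SSH port. When
// the VM is down or its network is gone, this returns the same class of error
// seen in the stderr blocks above ("connect: no route to host").
func sshReachable(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	for _, addr := range []string{"192.168.61.195:22", "192.168.72.242:22"} {
		if err := sshReachable(addr); err != nil {
			fmt.Printf("%s unreachable: %v\n", addr, err)
		}
	}
}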

                                                
                                    

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-304541 -n embed-certs-304541
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-11-28 00:57:38.684452884 +0000 UTC m=+5560.249479508
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
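The step above waits up to 9m0s for a pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace and gives up with `context deadline exceeded`. A rough sketch of that kind of label-selector wait, shelling out to kubectl (the context name, polling interval, and helper are assumptions, not the actual minikube test helpers):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForDashboardPod polls for a Running pod carrying the
// k8s-app=kubernetes-dashboard label, giving up once the deadline passes.
func waitForDashboardPod(kubeContext string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-n", "kubernetes-dashboard",
			"-l", "k8s-app=kubernetes-dashboard",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(10 * time.Second)
	}
	return fmt.Errorf("pod with k8s-app=kubernetes-dashboard not Running within %s", timeout)
}

func main() {
	if err := waitForDashboardPod("embed-certs-304541", 9*time.Minute); err != nil {
		fmt.Println(err)
	}
}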
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-304541 -n embed-certs-304541
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-304541 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-304541 logs -n 25: (1.572585508s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-188325                                 | cert-options-188325          | jenkins | v1.32.0 | 28 Nov 23 00:33 UTC | 28 Nov 23 00:33 UTC |
	| start   | -p no-preload-473615                                   | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:33 UTC | 28 Nov 23 00:36 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| start   | -p cert-expiration-747416                              | cert-expiration-747416       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:35 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-747416                              | cert-expiration-747416       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:35 UTC |
	| start   | -p embed-certs-304541                                  | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-732472        | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-732472                              | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-789586                              | stopped-upgrade-789586       | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-304541            | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-304541                                  | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-789586                              | stopped-upgrade-789586       | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-001086 | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	|         | disable-driver-mounts-001086                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:37 UTC |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-473615             | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:37 UTC | 28 Nov 23 00:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-473615                                   | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-732472             | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-488423  | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC | 28 Nov 23 00:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-732472                              | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC | 28 Nov 23 00:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-304541                 | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-304541                                  | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC | 28 Nov 23 00:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-473615                  | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-473615                                   | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC | 28 Nov 23 00:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-488423       | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:40 UTC | 28 Nov 23 00:48 UTC |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 00:40:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 00:40:42.238362   46126 out.go:296] Setting OutFile to fd 1 ...
	I1128 00:40:42.238498   46126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:40:42.238513   46126 out.go:309] Setting ErrFile to fd 2...
	I1128 00:40:42.238520   46126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:40:42.238712   46126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1128 00:40:42.239236   46126 out.go:303] Setting JSON to false
	I1128 00:40:42.240138   46126 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4989,"bootTime":1701127053,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 00:40:42.240194   46126 start.go:138] virtualization: kvm guest
	I1128 00:40:42.242505   46126 out.go:177] * [default-k8s-diff-port-488423] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 00:40:42.243937   46126 out.go:177]   - MINIKUBE_LOCATION=17206
	I1128 00:40:42.243990   46126 notify.go:220] Checking for updates...
	I1128 00:40:42.245317   46126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 00:40:42.246717   46126 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:40:42.248096   46126 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1128 00:40:42.249294   46126 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 00:40:42.250596   46126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 00:40:42.252296   46126 config.go:182] Loaded profile config "default-k8s-diff-port-488423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:40:42.252793   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:40:42.252854   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:40:42.267605   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45895
	I1128 00:40:42.267958   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:40:42.268457   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:40:42.268479   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:40:42.268774   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:40:42.268971   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:40:42.269215   46126 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 00:40:42.269470   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:40:42.269501   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:40:42.283984   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34957
	I1128 00:40:42.284338   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:40:42.284786   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:40:42.284808   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:40:42.285077   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:40:42.285263   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:40:42.319077   46126 out.go:177] * Using the kvm2 driver based on existing profile
	I1128 00:40:42.320321   46126 start.go:298] selected driver: kvm2
	I1128 00:40:42.320332   46126 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-488423 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-488423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.242 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:40:42.320481   46126 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 00:40:42.321242   46126 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:40:42.321325   46126 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17206-4749/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 00:40:42.335477   46126 install.go:137] /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I1128 00:40:42.335818   46126 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 00:40:42.335887   46126 cni.go:84] Creating CNI manager for ""
	I1128 00:40:42.335907   46126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:40:42.335922   46126 start_flags.go:323] config:
	{Name:default-k8s-diff-port-488423 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-48842
3 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.242 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:40:42.336092   46126 iso.go:125] acquiring lock: {Name:mkcbf4fbddcb89ef7fa17df683cb708781ecb7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:40:42.337823   46126 out.go:177] * Starting control plane node default-k8s-diff-port-488423 in cluster default-k8s-diff-port-488423
	I1128 00:40:40.713025   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:40:42.338980   46126 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:40:42.339010   46126 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1128 00:40:42.339024   46126 cache.go:56] Caching tarball of preloaded images
	I1128 00:40:42.339105   46126 preload.go:174] Found /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 00:40:42.339117   46126 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 00:40:42.339232   46126 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/config.json ...
	I1128 00:40:42.339416   46126 start.go:365] acquiring machines lock for default-k8s-diff-port-488423: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 00:40:43.785024   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:40:49.865013   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:40:52.936964   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:40:59.017058   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:02.089017   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:08.169021   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:11.241040   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:17.321032   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:20.393000   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:26.473039   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:29.544989   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:35.625074   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:38.697020   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:44.777040   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:47.849040   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:53.929055   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:57.001005   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:03.081016   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:06.153078   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:12.233029   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:15.305165   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:21.385067   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:24.457038   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:30.537025   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:33.608998   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:39.689061   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:42.761012   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:48.841003   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:51.912985   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:54.916816   45580 start.go:369] acquired machines lock for "embed-certs-304541" in 3m47.030120592s
	I1128 00:42:54.916877   45580 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:42:54.916890   45580 fix.go:54] fixHost starting: 
	I1128 00:42:54.917233   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:42:54.917266   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:42:54.932296   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38887
	I1128 00:42:54.932712   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:42:54.933230   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:42:54.933254   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:42:54.933574   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:42:54.933837   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:42:54.934006   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:42:54.935712   45580 fix.go:102] recreateIfNeeded on embed-certs-304541: state=Stopped err=<nil>
	I1128 00:42:54.935737   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	W1128 00:42:54.935937   45580 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:42:54.937893   45580 out.go:177] * Restarting existing kvm2 VM for "embed-certs-304541" ...
	I1128 00:42:54.914751   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:42:54.914794   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:42:54.916666   45269 machine.go:91] provisioned docker machine in 4m37.413850055s
	I1128 00:42:54.916713   45269 fix.go:56] fixHost completed within 4m37.433506318s
	I1128 00:42:54.916719   45269 start.go:83] releasing machines lock for "old-k8s-version-732472", held for 4m37.433526985s
	W1128 00:42:54.916738   45269 start.go:691] error starting host: provision: host is not running
	W1128 00:42:54.916844   45269 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1128 00:42:54.916854   45269 start.go:706] Will try again in 5 seconds ...
	I1128 00:42:54.939120   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Start
	I1128 00:42:54.939284   45580 main.go:141] libmachine: (embed-certs-304541) Ensuring networks are active...
	I1128 00:42:54.940122   45580 main.go:141] libmachine: (embed-certs-304541) Ensuring network default is active
	I1128 00:42:54.940636   45580 main.go:141] libmachine: (embed-certs-304541) Ensuring network mk-embed-certs-304541 is active
	I1128 00:42:54.941025   45580 main.go:141] libmachine: (embed-certs-304541) Getting domain xml...
	I1128 00:42:54.941883   45580 main.go:141] libmachine: (embed-certs-304541) Creating domain...
	I1128 00:42:56.157644   45580 main.go:141] libmachine: (embed-certs-304541) Waiting to get IP...
	I1128 00:42:56.158479   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:56.158803   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:56.158888   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:56.158791   46474 retry.go:31] will retry after 235.266272ms: waiting for machine to come up
	I1128 00:42:56.395238   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:56.395630   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:56.395664   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:56.395579   46474 retry.go:31] will retry after 352.110542ms: waiting for machine to come up
	I1128 00:42:56.749150   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:56.749542   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:56.749570   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:56.749500   46474 retry.go:31] will retry after 364.122623ms: waiting for machine to come up
	I1128 00:42:57.115054   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:57.115497   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:57.115526   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:57.115450   46474 retry.go:31] will retry after 583.197763ms: waiting for machine to come up
	I1128 00:42:57.700134   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:57.700551   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:57.700577   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:57.700497   46474 retry.go:31] will retry after 515.615548ms: waiting for machine to come up
	I1128 00:42:59.917964   45269 start.go:365] acquiring machines lock for old-k8s-version-732472: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 00:42:58.218252   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:58.218630   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:58.218668   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:58.218611   46474 retry.go:31] will retry after 690.258077ms: waiting for machine to come up
	I1128 00:42:58.910090   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:58.910438   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:58.910464   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:58.910413   46474 retry.go:31] will retry after 737.779074ms: waiting for machine to come up
	I1128 00:42:59.649308   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:59.649634   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:59.649661   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:59.649609   46474 retry.go:31] will retry after 1.23938471s: waiting for machine to come up
	I1128 00:43:00.890867   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:00.891318   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:00.891356   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:00.891298   46474 retry.go:31] will retry after 1.475598535s: waiting for machine to come up
	I1128 00:43:02.368630   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:02.369159   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:02.369189   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:02.369085   46474 retry.go:31] will retry after 2.323321s: waiting for machine to come up
	I1128 00:43:04.695735   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:04.696175   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:04.696208   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:04.696131   46474 retry.go:31] will retry after 1.903335453s: waiting for machine to come up
	I1128 00:43:06.601229   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:06.601657   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:06.601687   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:06.601612   46474 retry.go:31] will retry after 2.205948796s: waiting for machine to come up
	I1128 00:43:08.809792   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:08.810161   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:08.810188   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:08.810149   46474 retry.go:31] will retry after 3.31430253s: waiting for machine to come up
	I1128 00:43:12.126852   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:12.127294   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:12.127323   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:12.127249   46474 retry.go:31] will retry after 3.492216742s: waiting for machine to come up
	I1128 00:43:16.981905   45815 start.go:369] acquired machines lock for "no-preload-473615" in 3m38.128436656s
	I1128 00:43:16.981988   45815 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:43:16.982000   45815 fix.go:54] fixHost starting: 
	I1128 00:43:16.982400   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:43:16.982434   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:43:17.001935   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39505
	I1128 00:43:17.002390   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:43:17.002899   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:43:17.002930   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:43:17.003303   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:43:17.003515   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:17.003658   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:43:17.005243   45815 fix.go:102] recreateIfNeeded on no-preload-473615: state=Stopped err=<nil>
	I1128 00:43:17.005273   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	W1128 00:43:17.005442   45815 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:43:17.007831   45815 out.go:177] * Restarting existing kvm2 VM for "no-preload-473615" ...
	I1128 00:43:15.620590   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.621046   45580 main.go:141] libmachine: (embed-certs-304541) Found IP for machine: 192.168.50.93
	I1128 00:43:15.621071   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has current primary IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.621083   45580 main.go:141] libmachine: (embed-certs-304541) Reserving static IP address...
	I1128 00:43:15.621440   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "embed-certs-304541", mac: "52:54:00:0a:1d:4f", ip: "192.168.50.93"} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.621473   45580 main.go:141] libmachine: (embed-certs-304541) DBG | skip adding static IP to network mk-embed-certs-304541 - found existing host DHCP lease matching {name: "embed-certs-304541", mac: "52:54:00:0a:1d:4f", ip: "192.168.50.93"}
	I1128 00:43:15.621484   45580 main.go:141] libmachine: (embed-certs-304541) Reserved static IP address: 192.168.50.93
	I1128 00:43:15.621498   45580 main.go:141] libmachine: (embed-certs-304541) Waiting for SSH to be available...
	I1128 00:43:15.621516   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Getting to WaitForSSH function...
	I1128 00:43:15.623594   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.623865   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.623897   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.623968   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Using SSH client type: external
	I1128 00:43:15.623989   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa (-rw-------)
	I1128 00:43:15.624044   45580 main.go:141] libmachine: (embed-certs-304541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.93 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:43:15.624057   45580 main.go:141] libmachine: (embed-certs-304541) DBG | About to run SSH command:
	I1128 00:43:15.624068   45580 main.go:141] libmachine: (embed-certs-304541) DBG | exit 0
	I1128 00:43:15.708868   45580 main.go:141] libmachine: (embed-certs-304541) DBG | SSH cmd err, output: <nil>: 
	I1128 00:43:15.709246   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetConfigRaw
	I1128 00:43:15.709989   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetIP
	I1128 00:43:15.712312   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.712623   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.712660   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.712968   45580 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/config.json ...
	I1128 00:43:15.713166   45580 machine.go:88] provisioning docker machine ...
	I1128 00:43:15.713183   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:15.713360   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetMachineName
	I1128 00:43:15.713552   45580 buildroot.go:166] provisioning hostname "embed-certs-304541"
	I1128 00:43:15.713573   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetMachineName
	I1128 00:43:15.713731   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:15.716027   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.716386   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.716419   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.716530   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:15.716703   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:15.716856   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:15.717034   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:15.717229   45580 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:15.717565   45580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.93 22 <nil> <nil>}
	I1128 00:43:15.717579   45580 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-304541 && echo "embed-certs-304541" | sudo tee /etc/hostname
	I1128 00:43:15.841766   45580 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-304541
	
	I1128 00:43:15.841821   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:15.844529   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.844872   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.844919   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.845037   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:15.845231   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:15.845360   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:15.845476   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:15.845616   45580 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:15.845976   45580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.93 22 <nil> <nil>}
	I1128 00:43:15.846002   45580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-304541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-304541/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-304541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:43:15.965821   45580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:43:15.965855   45580 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:43:15.965876   45580 buildroot.go:174] setting up certificates
	I1128 00:43:15.965890   45580 provision.go:83] configureAuth start
	I1128 00:43:15.965903   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetMachineName
	I1128 00:43:15.966183   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetIP
	I1128 00:43:15.968916   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.969285   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.969313   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.969483   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:15.971549   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.971913   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.971949   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.972092   45580 provision.go:138] copyHostCerts
	I1128 00:43:15.972168   45580 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:43:15.972182   45580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:43:15.972260   45580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:43:15.972415   45580 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:43:15.972427   45580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:43:15.972472   45580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:43:15.972562   45580 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:43:15.972572   45580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:43:15.972603   45580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:43:15.972663   45580 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.embed-certs-304541 san=[192.168.50.93 192.168.50.93 localhost 127.0.0.1 minikube embed-certs-304541]
	I1128 00:43:16.272269   45580 provision.go:172] copyRemoteCerts
	I1128 00:43:16.272333   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:43:16.272354   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.274793   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.275102   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.275138   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.275285   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.275495   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.275628   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.275752   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:43:16.361853   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 00:43:16.386340   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:43:16.410490   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1128 00:43:16.433471   45580 provision.go:86] duration metric: configureAuth took 467.56808ms
	I1128 00:43:16.433505   45580 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:43:16.433686   45580 config.go:182] Loaded profile config "embed-certs-304541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:43:16.433760   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.436514   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.436987   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.437029   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.437129   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.437316   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.437472   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.437614   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.437748   45580 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:16.438055   45580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.93 22 <nil> <nil>}
	I1128 00:43:16.438072   45580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:43:16.732374   45580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:43:16.732407   45580 machine.go:91] provisioned docker machine in 1.019227514s
	I1128 00:43:16.732419   45580 start.go:300] post-start starting for "embed-certs-304541" (driver="kvm2")
	I1128 00:43:16.732429   45580 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:43:16.732474   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.732847   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:43:16.732879   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.735564   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.735987   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.736027   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.736210   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.736393   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.736549   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.736714   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:43:16.824741   45580 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:43:16.829313   45580 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:43:16.829347   45580 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:43:16.829426   45580 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:43:16.829529   45580 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:43:16.829642   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:43:16.839740   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:16.862881   45580 start.go:303] post-start completed in 130.432418ms
	I1128 00:43:16.862911   45580 fix.go:56] fixHost completed within 21.946020541s
	I1128 00:43:16.862938   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.865721   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.866113   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.866144   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.866336   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.866545   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.866744   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.866869   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.867046   45580 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:16.867350   45580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.93 22 <nil> <nil>}
	I1128 00:43:16.867359   45580 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:43:16.981759   45580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701132196.930241591
	
	I1128 00:43:16.981779   45580 fix.go:206] guest clock: 1701132196.930241591
	I1128 00:43:16.981786   45580 fix.go:219] Guest: 2023-11-28 00:43:16.930241591 +0000 UTC Remote: 2023-11-28 00:43:16.862915941 +0000 UTC m=+249.133993071 (delta=67.32565ms)
	I1128 00:43:16.981804   45580 fix.go:190] guest clock delta is within tolerance: 67.32565ms
	I1128 00:43:16.981809   45580 start.go:83] releasing machines lock for "embed-certs-304541", held for 22.064954687s
	I1128 00:43:16.981848   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.982121   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetIP
	I1128 00:43:16.984621   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.984927   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.984986   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.985171   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.985675   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.985825   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.985892   45580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:43:16.985926   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.986025   45580 ssh_runner.go:195] Run: cat /version.json
	I1128 00:43:16.986054   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.988651   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.988839   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.989079   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.989113   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.989367   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.989411   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.989451   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.989491   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.989544   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.989648   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.989692   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.989781   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.989860   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:43:16.989933   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:43:17.104567   45580 ssh_runner.go:195] Run: systemctl --version
	I1128 00:43:17.110844   45580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:43:17.254201   45580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:43:17.262078   45580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:43:17.262154   45580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:43:17.282179   45580 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:43:17.282209   45580 start.go:472] detecting cgroup driver to use...
	I1128 00:43:17.282271   45580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:43:17.296891   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:43:17.311479   45580 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:43:17.311552   45580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:43:17.325946   45580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:43:17.340513   45580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:43:17.469601   45580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:43:17.605127   45580 docker.go:219] disabling docker service ...
	I1128 00:43:17.605199   45580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:43:17.621850   45580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:43:17.634608   45580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:43:17.753009   45580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:43:17.859260   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:43:17.872564   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:43:17.889701   45580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 00:43:17.889755   45580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:17.898724   45580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:43:17.898799   45580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:17.907565   45580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:17.916243   45580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:17.925280   45580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:43:17.934933   45580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:43:17.943902   45580 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:43:17.943960   45580 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:43:17.957608   45580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:43:17.967379   45580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:43:18.074173   45580 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 00:43:18.251191   45580 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:43:18.251264   45580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:43:18.259963   45580 start.go:540] Will wait 60s for crictl version
	I1128 00:43:18.260041   45580 ssh_runner.go:195] Run: which crictl
	I1128 00:43:18.263936   45580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:43:18.303087   45580 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:43:18.303181   45580 ssh_runner.go:195] Run: crio --version
	I1128 00:43:18.344939   45580 ssh_runner.go:195] Run: crio --version
	I1128 00:43:18.402444   45580 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 00:43:17.009281   45815 main.go:141] libmachine: (no-preload-473615) Calling .Start
	I1128 00:43:17.009442   45815 main.go:141] libmachine: (no-preload-473615) Ensuring networks are active...
	I1128 00:43:17.010161   45815 main.go:141] libmachine: (no-preload-473615) Ensuring network default is active
	I1128 00:43:17.010485   45815 main.go:141] libmachine: (no-preload-473615) Ensuring network mk-no-preload-473615 is active
	I1128 00:43:17.010860   45815 main.go:141] libmachine: (no-preload-473615) Getting domain xml...
	I1128 00:43:17.011780   45815 main.go:141] libmachine: (no-preload-473615) Creating domain...
	I1128 00:43:18.289916   45815 main.go:141] libmachine: (no-preload-473615) Waiting to get IP...
	I1128 00:43:18.290892   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:18.291348   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:18.291434   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:18.291321   46604 retry.go:31] will retry after 208.579367ms: waiting for machine to come up
	I1128 00:43:18.501947   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:18.502401   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:18.502431   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:18.502362   46604 retry.go:31] will retry after 296.427399ms: waiting for machine to come up
	I1128 00:43:18.403974   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetIP
	I1128 00:43:18.406811   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:18.407171   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:18.407201   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:18.407459   45580 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1128 00:43:18.411727   45580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:18.423460   45580 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:43:18.423570   45580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:43:18.463722   45580 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 00:43:18.463797   45580 ssh_runner.go:195] Run: which lz4
	I1128 00:43:18.468008   45580 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 00:43:18.472523   45580 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 00:43:18.472560   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 00:43:20.378745   45580 crio.go:444] Took 1.910818 seconds to copy over tarball
	I1128 00:43:20.378836   45580 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 00:43:18.801131   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:18.801707   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:18.801741   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:18.801666   46604 retry.go:31] will retry after 355.365314ms: waiting for machine to come up
	I1128 00:43:19.159088   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:19.159590   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:19.159628   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:19.159550   46604 retry.go:31] will retry after 584.908889ms: waiting for machine to come up
	I1128 00:43:19.746379   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:19.746941   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:19.746978   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:19.746901   46604 retry.go:31] will retry after 707.432097ms: waiting for machine to come up
	I1128 00:43:20.455880   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:20.456378   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:20.456402   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:20.456346   46604 retry.go:31] will retry after 598.57984ms: waiting for machine to come up
	I1128 00:43:21.056102   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:21.056548   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:21.056579   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:21.056500   46604 retry.go:31] will retry after 742.55033ms: waiting for machine to come up
	I1128 00:43:21.800382   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:21.800895   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:21.800926   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:21.800841   46604 retry.go:31] will retry after 1.138217867s: waiting for machine to come up
	I1128 00:43:22.941401   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:22.941902   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:22.941932   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:22.941867   46604 retry.go:31] will retry after 1.552423219s: waiting for machine to come up
	I1128 00:43:23.310969   45580 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.932089296s)
	I1128 00:43:23.311004   45580 crio.go:451] Took 2.932228 seconds to extract the tarball
	I1128 00:43:23.311017   45580 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 00:43:23.351844   45580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:43:23.397599   45580 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 00:43:23.397625   45580 cache_images.go:84] Images are preloaded, skipping loading
	I1128 00:43:23.397705   45580 ssh_runner.go:195] Run: crio config
	I1128 00:43:23.460298   45580 cni.go:84] Creating CNI manager for ""
	I1128 00:43:23.460326   45580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:43:23.460348   45580 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:43:23.460383   45580 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.93 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-304541 NodeName:embed-certs-304541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.93"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.93 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:43:23.460547   45580 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.93
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-304541"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.93
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.93"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:43:23.460641   45580 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-304541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.93
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-304541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 00:43:23.460696   45580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 00:43:23.470334   45580 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:43:23.470410   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:43:23.480675   45580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1128 00:43:23.497482   45580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:43:23.513709   45580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1128 00:43:23.530363   45580 ssh_runner.go:195] Run: grep 192.168.50.93	control-plane.minikube.internal$ /etc/hosts
	I1128 00:43:23.533938   45580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.93	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:23.546399   45580 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541 for IP: 192.168.50.93
	I1128 00:43:23.546443   45580 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:43:23.546632   45580 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:43:23.546695   45580 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:43:23.546799   45580 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/client.key
	I1128 00:43:23.546892   45580 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/apiserver.key.9bda4d83
	I1128 00:43:23.546960   45580 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/proxy-client.key
	I1128 00:43:23.547122   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:43:23.547178   45580 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:43:23.547196   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:43:23.547237   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:43:23.547280   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:43:23.547317   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:43:23.547392   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:23.548287   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:43:23.571910   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1128 00:43:23.597339   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:43:23.621977   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 00:43:23.648048   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:43:23.671213   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:43:23.695307   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:43:23.719122   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:43:23.743153   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:43:23.766469   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:43:23.789932   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:43:23.813950   45580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:43:23.830291   45580 ssh_runner.go:195] Run: openssl version
	I1128 00:43:23.837945   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:43:23.847572   45580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:23.852284   45580 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:23.852334   45580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:23.860003   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:43:23.872829   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:43:23.886286   45580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:43:23.892997   45580 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:43:23.893079   45580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:43:23.899923   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:43:23.909771   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:43:23.919498   45580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:43:23.924066   45580 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:43:23.924126   45580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:43:23.929583   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:43:23.939366   45580 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:43:23.944091   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:43:23.950255   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:43:23.956493   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:43:23.962278   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:43:23.970032   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:43:23.977660   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1128 00:43:23.984257   45580 kubeadm.go:404] StartCluster: {Name:embed-certs-304541 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-304541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.93 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:43:23.984408   45580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:43:23.984471   45580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:43:24.026147   45580 cri.go:89] found id: ""
	I1128 00:43:24.026222   45580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:43:24.035520   45580 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:43:24.035550   45580 kubeadm.go:636] restartCluster start
	I1128 00:43:24.035631   45580 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:43:24.044318   45580 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:24.045591   45580 kubeconfig.go:92] found "embed-certs-304541" server: "https://192.168.50.93:8443"
	I1128 00:43:24.047987   45580 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:43:24.056482   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:24.056541   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:24.067055   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:24.067072   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:24.067108   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:24.076950   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:24.577344   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:24.577441   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:24.588707   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:25.077862   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:25.077965   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:25.089729   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:25.577938   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:25.578019   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:25.593191   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:26.077819   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:26.077891   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:26.091224   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:26.577757   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:26.577844   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:26.588769   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:27.077106   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:27.077235   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:27.088668   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:27.577169   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:27.577249   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:27.588221   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:24.496599   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:24.496989   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:24.497018   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:24.496943   46604 retry.go:31] will retry after 2.05343917s: waiting for machine to come up
	I1128 00:43:26.552249   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:26.552684   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:26.552716   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:26.552636   46604 retry.go:31] will retry after 2.338063311s: waiting for machine to come up
	I1128 00:43:28.077161   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:28.077265   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:28.088552   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:28.577077   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:28.577168   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:28.588335   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:29.077927   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:29.078027   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:29.089679   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:29.577193   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:29.577293   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:29.588230   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:30.077430   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:30.077542   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:30.088547   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:30.577088   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:30.577203   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:30.588230   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:31.077809   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:31.077907   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:31.090329   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:31.577897   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:31.577975   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:31.591561   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:32.077101   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:32.077206   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:32.087945   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:32.577446   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:32.577528   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:32.588542   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:28.893450   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:28.893812   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:28.893841   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:28.893761   46604 retry.go:31] will retry after 3.578756905s: waiting for machine to come up
	I1128 00:43:32.473719   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:32.474199   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:32.474234   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:32.474155   46604 retry.go:31] will retry after 3.070637163s: waiting for machine to come up
	I1128 00:43:36.805769   46126 start.go:369] acquired machines lock for "default-k8s-diff-port-488423" in 2m54.466321295s
	I1128 00:43:36.805830   46126 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:43:36.805840   46126 fix.go:54] fixHost starting: 
	I1128 00:43:36.806271   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:43:36.806311   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:43:36.825195   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32859
	I1128 00:43:36.825723   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:43:36.826325   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:43:36.826348   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:43:36.826703   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:43:36.826932   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:36.827106   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:43:36.828683   46126 fix.go:102] recreateIfNeeded on default-k8s-diff-port-488423: state=Stopped err=<nil>
	I1128 00:43:36.828709   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	W1128 00:43:36.828895   46126 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:43:36.830377   46126 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-488423" ...
	I1128 00:43:36.831614   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Start
	I1128 00:43:36.831781   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Ensuring networks are active...
	I1128 00:43:36.832447   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Ensuring network default is active
	I1128 00:43:36.832841   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Ensuring network mk-default-k8s-diff-port-488423 is active
	I1128 00:43:36.833220   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Getting domain xml...
	I1128 00:43:36.833947   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Creating domain...
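	The 46126 lines above restart a stopped kvm2 VM: ensure the "default" and "mk-default-k8s-diff-port-488423" libvirt networks are active, fetch the domain XML, and start the domain. Below is a rough sketch of those same steps driven through the virsh CLI rather than the libvirt API the kvm2 driver actually uses; the network and domain names are copied from the log, everything else is illustrative only.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out to virsh and echoes its combined output, mirroring the
	// "Ensuring networks are active ... Creating domain..." sequence above.
	func run(args ...string) error {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		fmt.Printf("virsh %v: %s\n", args, out)
		return err
	}

	func main() {
		// Ensure the libvirt networks the VM depends on are up; an
		// already-active network just returns an error we ignore here.
		for _, net := range []string{"default", "mk-default-k8s-diff-port-488423"} {
			_ = run("net-start", net)
		}
		// Boot the existing (stopped) domain instead of recreating it.
		if err := run("start", "default-k8s-diff-port-488423"); err != nil {
			fmt.Println("start failed:", err)
		}
	}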
	I1128 00:43:33.077031   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:33.077109   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:33.088430   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:33.578007   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:33.578093   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:33.589185   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:34.056684   45580 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:43:34.056718   45580 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:43:34.056733   45580 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:43:34.056836   45580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:43:34.096078   45580 cri.go:89] found id: ""
	I1128 00:43:34.096157   45580 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:43:34.111200   45580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:43:34.119603   45580 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:43:34.119654   45580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:43:34.128150   45580 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:43:34.128170   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:34.236389   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:34.879134   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:35.070594   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:35.159436   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:35.223694   45580 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:43:35.223787   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:35.238511   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:35.753955   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:36.254449   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:36.753943   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:37.253987   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:37.753515   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:37.777619   45580 api_server.go:72] duration metric: took 2.553922938s to wait for apiserver process to appear ...
	I1128 00:43:37.777646   45580 api_server.go:88] waiting for apiserver healthz status ...
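	The 45580 lines above show the readiness loop after the kubeadm init phases: the guest is polled roughly every 500ms with "sudo pgrep -xnf kube-apiserver.*minikube.*" until a PID appears, at which point the wait switches to the healthz probe. A minimal, self-contained Go sketch of that polling pattern follows; it runs the probe locally instead of over the logged ssh_runner and is not minikube's actual api_server.go code.

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForAPIServerPID retries the same pgrep probe the log shows until it
	// returns a PID or the context deadline expires.
	func waitForAPIServerPID(ctx context.Context, interval time.Duration) (string, error) {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				return strings.TrimSpace(string(out)), nil // process is up
			}
			select {
			case <-ctx.Done():
				return "", fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
			case <-ticker.C:
				// retry, mirroring the ~500ms cadence visible in the timestamps above
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		pid, err := waitForAPIServerPID(ctx, 500*time.Millisecond)
		fmt.Println(pid, err)
	}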
	I1128 00:43:35.548294   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.548718   45815 main.go:141] libmachine: (no-preload-473615) Found IP for machine: 192.168.61.195
	I1128 00:43:35.548746   45815 main.go:141] libmachine: (no-preload-473615) Reserving static IP address...
	I1128 00:43:35.548790   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has current primary IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.549194   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "no-preload-473615", mac: "52:54:00:bb:93:0d", ip: "192.168.61.195"} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.549223   45815 main.go:141] libmachine: (no-preload-473615) DBG | skip adding static IP to network mk-no-preload-473615 - found existing host DHCP lease matching {name: "no-preload-473615", mac: "52:54:00:bb:93:0d", ip: "192.168.61.195"}
	I1128 00:43:35.549238   45815 main.go:141] libmachine: (no-preload-473615) Reserved static IP address: 192.168.61.195
	I1128 00:43:35.549253   45815 main.go:141] libmachine: (no-preload-473615) Waiting for SSH to be available...
	I1128 00:43:35.549265   45815 main.go:141] libmachine: (no-preload-473615) DBG | Getting to WaitForSSH function...
	I1128 00:43:35.551246   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.551573   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.551601   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.551757   45815 main.go:141] libmachine: (no-preload-473615) DBG | Using SSH client type: external
	I1128 00:43:35.551778   45815 main.go:141] libmachine: (no-preload-473615) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa (-rw-------)
	I1128 00:43:35.551811   45815 main.go:141] libmachine: (no-preload-473615) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:43:35.551831   45815 main.go:141] libmachine: (no-preload-473615) DBG | About to run SSH command:
	I1128 00:43:35.551867   45815 main.go:141] libmachine: (no-preload-473615) DBG | exit 0
	I1128 00:43:35.636291   45815 main.go:141] libmachine: (no-preload-473615) DBG | SSH cmd err, output: <nil>: 
	I1128 00:43:35.636667   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetConfigRaw
	I1128 00:43:35.637278   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetIP
	I1128 00:43:35.639799   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.640164   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.640209   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.640423   45815 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/config.json ...
	I1128 00:43:35.640598   45815 machine.go:88] provisioning docker machine ...
	I1128 00:43:35.640632   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:35.640853   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetMachineName
	I1128 00:43:35.641071   45815 buildroot.go:166] provisioning hostname "no-preload-473615"
	I1128 00:43:35.641090   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetMachineName
	I1128 00:43:35.641242   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:35.643554   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.643845   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.643905   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.643977   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:35.644140   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:35.644279   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:35.644370   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:35.644540   45815 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:35.644971   45815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I1128 00:43:35.644986   45815 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-473615 && echo "no-preload-473615" | sudo tee /etc/hostname
	I1128 00:43:35.766635   45815 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-473615
	
	I1128 00:43:35.766689   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:35.769704   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.770068   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.770108   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.770279   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:35.770463   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:35.770622   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:35.770733   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:35.770849   45815 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:35.771214   45815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I1128 00:43:35.771235   45815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-473615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-473615/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-473615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:43:35.889378   45815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:43:35.889416   45815 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:43:35.889480   45815 buildroot.go:174] setting up certificates
	I1128 00:43:35.889494   45815 provision.go:83] configureAuth start
	I1128 00:43:35.889506   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetMachineName
	I1128 00:43:35.889810   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetIP
	I1128 00:43:35.892924   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.893313   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.893359   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.893477   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:35.895759   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.896140   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.896169   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.896281   45815 provision.go:138] copyHostCerts
	I1128 00:43:35.896345   45815 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:43:35.896370   45815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:43:35.896448   45815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:43:35.896565   45815 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:43:35.896577   45815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:43:35.896618   45815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:43:35.896713   45815 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:43:35.896728   45815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:43:35.896778   45815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:43:35.896856   45815 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.no-preload-473615 san=[192.168.61.195 192.168.61.195 localhost 127.0.0.1 minikube no-preload-473615]
	I1128 00:43:36.080367   45815 provision.go:172] copyRemoteCerts
	I1128 00:43:36.080429   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:43:36.080451   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.082989   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.083327   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.083358   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.083529   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.083745   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.083927   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.084077   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:43:36.166338   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:43:36.191867   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1128 00:43:36.214184   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 00:43:36.237102   45815 provision.go:86] duration metric: configureAuth took 347.594627ms
	I1128 00:43:36.237135   45815 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:43:36.237338   45815 config.go:182] Loaded profile config "no-preload-473615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 00:43:36.237421   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.240408   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.240787   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.240826   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.240995   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.241193   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.241368   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.241539   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.241712   45815 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:36.242000   45815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I1128 00:43:36.242016   45815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:43:36.565582   45815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:43:36.565609   45815 machine.go:91] provisioned docker machine in 924.985826ms
	I1128 00:43:36.565623   45815 start.go:300] post-start starting for "no-preload-473615" (driver="kvm2")
	I1128 00:43:36.565649   45815 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:43:36.565677   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.565994   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:43:36.566025   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.568653   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.569032   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.569064   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.569148   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.569337   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.569502   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.569678   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:43:36.655695   45815 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:43:36.659909   45815 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:43:36.659941   45815 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:43:36.660020   45815 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:43:36.660108   45815 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:43:36.660228   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:43:36.669575   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:36.690970   45815 start.go:303] post-start completed in 125.33198ms
	I1128 00:43:36.690998   45815 fix.go:56] fixHost completed within 19.708998537s
	I1128 00:43:36.691022   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.693929   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.694361   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.694400   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.694665   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.694877   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.695064   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.695237   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.695404   45815 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:36.695738   45815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I1128 00:43:36.695750   45815 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:43:36.805602   45815 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701132216.779589412
	
	I1128 00:43:36.805626   45815 fix.go:206] guest clock: 1701132216.779589412
	I1128 00:43:36.805637   45815 fix.go:219] Guest: 2023-11-28 00:43:36.779589412 +0000 UTC Remote: 2023-11-28 00:43:36.691003095 +0000 UTC m=+237.986754258 (delta=88.586317ms)
	I1128 00:43:36.805673   45815 fix.go:190] guest clock delta is within tolerance: 88.586317ms
	I1128 00:43:36.805678   45815 start.go:83] releasing machines lock for "no-preload-473615", held for 19.823720426s
	I1128 00:43:36.805705   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.805989   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetIP
	I1128 00:43:36.808864   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.809316   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.809346   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.809529   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.810162   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.810361   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.810441   45815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:43:36.810494   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.810824   45815 ssh_runner.go:195] Run: cat /version.json
	I1128 00:43:36.810845   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.813747   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.813979   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.814064   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.814102   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.814263   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.814444   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.814471   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.814508   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.814659   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.814764   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.814844   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.814913   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:43:36.815484   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.815640   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:43:36.923054   45815 ssh_runner.go:195] Run: systemctl --version
	I1128 00:43:36.930078   45815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:43:37.082251   45815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:43:37.088817   45815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:43:37.088890   45815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:43:37.110921   45815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:43:37.110950   45815 start.go:472] detecting cgroup driver to use...
	I1128 00:43:37.111017   45815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:43:37.128450   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:43:37.144814   45815 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:43:37.144875   45815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:43:37.158185   45815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:43:37.170311   45815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:43:37.287910   45815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:43:37.414142   45815 docker.go:219] disabling docker service ...
	I1128 00:43:37.414222   45815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:43:37.427085   45815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:43:37.438631   45815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:43:37.559028   45815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:43:37.676646   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:43:37.689214   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:43:37.709298   45815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 00:43:37.709370   45815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:37.718368   45815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:43:37.718446   45815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:37.727188   45815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:37.736230   45815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:37.745594   45815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:43:37.755149   45815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:43:37.763179   45815 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:43:37.763237   45815 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:43:37.780091   45815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:43:37.790861   45815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:43:37.923396   45815 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 00:43:38.133933   45815 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:43:38.134013   45815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:43:38.143538   45815 start.go:540] Will wait 60s for crictl version
	I1128 00:43:38.143598   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.149212   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:43:38.205988   45815 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:43:38.206079   45815 ssh_runner.go:195] Run: crio --version
	I1128 00:43:38.261211   45815 ssh_runner.go:195] Run: crio --version
	I1128 00:43:38.315398   45815 out.go:177] * Preparing Kubernetes v1.29.0-rc.0 on CRI-O 1.24.1 ...
	I1128 00:43:38.317052   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetIP
	I1128 00:43:38.320262   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:38.320708   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:38.320736   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:38.320976   45815 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1128 00:43:38.325437   45815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:38.337411   45815 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1128 00:43:38.337457   45815 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:43:38.384218   45815 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.0". assuming images are not preloaded.
	I1128 00:43:38.384245   45815 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.0 registry.k8s.io/kube-controller-manager:v1.29.0-rc.0 registry.k8s.io/kube-scheduler:v1.29.0-rc.0 registry.k8s.io/kube-proxy:v1.29.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1128 00:43:38.384325   45815 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:38.384533   45815 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.384553   45815 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1128 00:43:38.384634   45815 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.384726   45815 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.384817   45815 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.384870   45815 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.384931   45815 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.386318   45815 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:38.386368   45815 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1128 00:43:38.386381   45815 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.386373   45815 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.386324   45815 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.386316   45815 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.386319   45815 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.386326   45815 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.526945   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.527246   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1128 00:43:38.538042   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.538097   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.539522   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.549538   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.557097   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.621381   45815 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.0" does not exist at hash "4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9" in container runtime
	I1128 00:43:38.621440   45815 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.621516   45815 ssh_runner.go:195] Run: which crictl
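	The 45815 lines above are the image-cache check for a no-preload start: "sudo crictl images --output json" finds no preloaded images, each required image is then inspected with "podman image inspect", and anything whose ID does not match the expected hash is marked "needs transfer". A hypothetical Go sketch of the first half of that check follows, assuming the standard crictl JSON shape ("images" -> "repoTags"); it illustrates the pattern and is not minikube's cache_images.go.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// crictlImages matches the assumed shape of `crictl images --output json`.
	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// missingImages returns the required tags the container runtime does not
	// already have; in the flow above those would be queued for transfer.
	func missingImages(required []string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return nil, err
		}
		var list crictlImages
		if err := json.Unmarshal(out, &list); err != nil {
			return nil, err
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		var missing []string
		for _, want := range required {
			if !have[want] {
				missing = append(missing, want)
			}
		}
		return missing, nil
	}

	func main() {
		missing, err := missingImages([]string{"registry.k8s.io/kube-scheduler:v1.29.0-rc.0"})
		fmt.Println(missing, err)
	}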
	I1128 00:43:38.208059   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting to get IP...
	I1128 00:43:38.209168   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.209599   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.209688   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:38.209572   46749 retry.go:31] will retry after 256.562292ms: waiting for machine to come up
	I1128 00:43:38.468199   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.468798   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.468828   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:38.468722   46749 retry.go:31] will retry after 287.91937ms: waiting for machine to come up
	I1128 00:43:38.758157   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.758610   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.758640   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:38.758555   46749 retry.go:31] will retry after 377.696379ms: waiting for machine to come up
	I1128 00:43:39.138269   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:39.138761   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:39.138795   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:39.138706   46749 retry.go:31] will retry after 476.093256ms: waiting for machine to come up
	I1128 00:43:39.616256   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:39.616611   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:39.616638   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:39.616577   46749 retry.go:31] will retry after 628.654941ms: waiting for machine to come up
	I1128 00:43:40.246993   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:40.247498   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:40.247543   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:40.247455   46749 retry.go:31] will retry after 607.981973ms: waiting for machine to come up
	I1128 00:43:40.857220   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:40.857634   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:40.857663   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:40.857592   46749 retry.go:31] will retry after 866.108704ms: waiting for machine to come up
	I1128 00:43:41.725140   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:41.725695   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:41.725716   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:41.725609   46749 retry.go:31] will retry after 1.158669064s: waiting for machine to come up
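	The 46749 retry.go lines above wait for the restarted VM to pick up a DHCP lease, sleeping a little longer after each failed lookup (256ms, 287ms, 377ms, 476ms, 628ms, ...). Below is a small Go sketch of that jittered, growing backoff; the growth factor and jitter are assumptions chosen only to roughly match the printed delays, not the actual retry.go parameters.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP keeps calling lookup with a jittered, increasing delay until it
	// succeeds or the attempt budget runs out, like the "waiting for machine to
	// come up" loop above.
	func waitForIP(lookup func() (string, error), attempts int) (string, error) {
		delay := 250 * time.Millisecond
		for i := 0; i < attempts; i++ {
			ip, err := lookup()
			if err == nil {
				return ip, nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return "", errors.New("machine never reported an IP")
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 4 {
				return "", errors.New("no lease yet") // simulate a missing DHCP lease
			}
			return "192.168.39.10", nil
		}, 10)
		fmt.Println(ip, err)
	}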
	I1128 00:43:37.777663   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:42.028441   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:43:42.028478   45580 api_server.go:103] status: https://192.168.50.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:43:42.028492   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:42.043818   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:43:42.043846   45580 api_server.go:103] status: https://192.168.50.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:43:42.544532   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:42.551469   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:43:42.551505   45580 api_server.go:103] status: https://192.168.50.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:43:43.044055   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:43.050233   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:43:43.050262   45580 api_server.go:103] status: https://192.168.50.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:43:43.544857   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:43.550155   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 200:
	ok
	I1128 00:43:43.558929   45580 api_server.go:141] control plane version: v1.28.4
	I1128 00:43:43.558962   45580 api_server.go:131] duration metric: took 5.781308354s to wait for apiserver health ...
	I1128 00:43:43.558974   45580 cni.go:84] Creating CNI manager for ""
	I1128 00:43:43.558984   45580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:43:43.560872   45580 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:43:38.775724   45815 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1128 00:43:38.775776   45815 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.775827   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.775953   45815 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1128 00:43:38.776035   45815 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.0" does not exist at hash "e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7" in container runtime
	I1128 00:43:38.776059   45815 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.776106   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.776188   45815 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.0" does not exist at hash "e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4" in container runtime
	I1128 00:43:38.776220   45815 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.776247   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.776315   45815 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.0" does not exist at hash "df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55" in container runtime
	I1128 00:43:38.776335   45815 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.776360   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.776456   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.776562   45815 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.776601   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.792457   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.792533   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.792584   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.792634   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.792714   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.929517   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.0
	I1128 00:43:38.929640   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0
	I1128 00:43:38.941438   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.0
	I1128 00:43:38.941544   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0
	I1128 00:43:38.941623   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.0
	I1128 00:43:38.941704   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0
	I1128 00:43:38.964773   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1128 00:43:38.964890   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1128 00:43:38.964980   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.0
	I1128 00:43:38.965038   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0
	I1128 00:43:38.965118   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1128 00:43:38.965175   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1128 00:43:38.965250   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0 (exists)
	I1128 00:43:38.965262   45815 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0
	I1128 00:43:38.965291   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0
	I1128 00:43:38.970386   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1128 00:43:38.970443   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0 (exists)
	I1128 00:43:38.970458   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0 (exists)
	I1128 00:43:38.974722   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1128 00:43:38.974970   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0 (exists)
	I1128 00:43:39.286976   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:41.143462   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0: (2.178138495s)
	I1128 00:43:41.143491   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.0 from cache
	I1128 00:43:41.143520   45815 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1128 00:43:41.143536   45815 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.856517641s)
	I1128 00:43:41.143563   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1128 00:43:41.143596   45815 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1128 00:43:41.143630   45815 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:41.143678   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:43.335836   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.192246706s)
	I1128 00:43:43.335894   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1128 00:43:43.335859   45815 ssh_runner.go:235] Completed: which crictl: (2.192168329s)
	I1128 00:43:43.335938   45815 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0
	I1128 00:43:43.335970   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0
	I1128 00:43:43.335971   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:42.886014   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:42.886540   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:42.886564   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:42.886457   46749 retry.go:31] will retry after 1.698662705s: waiting for machine to come up
	I1128 00:43:44.586452   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:44.586892   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:44.586917   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:44.586848   46749 retry.go:31] will retry after 1.681392058s: waiting for machine to come up
	I1128 00:43:46.270022   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:46.270545   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:46.270578   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:46.270491   46749 retry.go:31] will retry after 2.061464637s: waiting for machine to come up
	I1128 00:43:43.562274   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:43:43.583729   45580 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:43:43.614704   45580 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:43:43.627543   45580 system_pods.go:59] 8 kube-system pods found
	I1128 00:43:43.627587   45580 system_pods.go:61] "coredns-5dd5756b68-crmfq" [e412b41a-a4a4-4c8c-8fe9-b96c52e5815c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:43:43.627602   45580 system_pods.go:61] "etcd-embed-certs-304541" [ceeea55a-ffbb-4c18-b563-3552f8d47f3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 00:43:43.627622   45580 system_pods.go:61] "kube-apiserver-embed-certs-304541" [e7bd6f60-fe90-4413-b906-0101ad3bda9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 00:43:43.627632   45580 system_pods.go:61] "kube-controller-manager-embed-certs-304541" [e083fd78-3aad-44ed-8bac-fc72eeded7f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 00:43:43.627652   45580 system_pods.go:61] "kube-proxy-6d4rt" [bc801fd6-e726-41d3-afcf-5ed86723dca9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 00:43:43.627665   45580 system_pods.go:61] "kube-scheduler-embed-certs-304541" [df10b58f-43ec-4492-8d95-0d91ee88fec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 00:43:43.627676   45580 system_pods.go:61] "metrics-server-57f55c9bc5-sx4m7" [1618a041-6077-4076-8178-f2692dc983b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:43:43.627686   45580 system_pods.go:61] "storage-provisioner" [acaed13d-b10c-4fb6-b2b7-452cf928e1e5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 00:43:43.627696   45580 system_pods.go:74] duration metric: took 12.96707ms to wait for pod list to return data ...
	I1128 00:43:43.627709   45580 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:43:43.632593   45580 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:43:43.632628   45580 node_conditions.go:123] node cpu capacity is 2
	I1128 00:43:43.632642   45580 node_conditions.go:105] duration metric: took 4.924217ms to run NodePressure ...
	I1128 00:43:43.632667   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:43.945692   45580 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:43:43.950639   45580 kubeadm.go:787] kubelet initialised
	I1128 00:43:43.950666   45580 kubeadm.go:788] duration metric: took 4.940609ms waiting for restarted kubelet to initialise ...
	I1128 00:43:43.950677   45580 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:43:43.956229   45580 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-crmfq" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:45.975328   45580 pod_ready.go:102] pod "coredns-5dd5756b68-crmfq" in "kube-system" namespace has status "Ready":"False"
	I1128 00:43:46.036655   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0: (2.700640635s)
	I1128 00:43:46.036696   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.0 from cache
	I1128 00:43:46.036721   45815 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0
	I1128 00:43:46.036786   45815 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.700708537s)
	I1128 00:43:46.036846   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1128 00:43:46.036792   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0
	I1128 00:43:46.036943   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1128 00:43:48.418287   45815 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.381312759s)
	I1128 00:43:48.418326   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0: (2.381419374s)
	I1128 00:43:48.418339   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1128 00:43:48.418346   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.0 from cache
	I1128 00:43:48.418370   45815 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1128 00:43:48.418426   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1128 00:43:48.333973   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:48.334480   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:48.334509   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:48.334432   46749 retry.go:31] will retry after 3.421790433s: waiting for machine to come up
	I1128 00:43:51.757991   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:51.758478   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:51.758505   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:51.758448   46749 retry.go:31] will retry after 3.726327818s: waiting for machine to come up
	I1128 00:43:48.484870   45580 pod_ready.go:92] pod "coredns-5dd5756b68-crmfq" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:48.484903   45580 pod_ready.go:81] duration metric: took 4.52864781s waiting for pod "coredns-5dd5756b68-crmfq" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:48.484916   45580 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:49.006488   45580 pod_ready.go:92] pod "etcd-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:49.006516   45580 pod_ready.go:81] duration metric: took 521.591023ms waiting for pod "etcd-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:49.006528   45580 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:49.014231   45580 pod_ready.go:92] pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:49.014258   45580 pod_ready.go:81] duration metric: took 7.721879ms waiting for pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:49.014270   45580 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:51.284611   45580 pod_ready.go:102] pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace has status "Ready":"False"
	I1128 00:43:52.636848   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.218389263s)
	I1128 00:43:52.636883   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1128 00:43:52.636912   45815 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0
	I1128 00:43:52.636964   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0
	I1128 00:43:56.745904   45269 start.go:369] acquired machines lock for "old-k8s-version-732472" in 56.827856444s
	I1128 00:43:56.745949   45269 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:43:56.745959   45269 fix.go:54] fixHost starting: 
	I1128 00:43:56.746379   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:43:56.746447   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:43:56.764386   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35269
	I1128 00:43:56.764907   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:43:56.765554   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:43:56.765584   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:43:56.766037   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:43:56.766221   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:43:56.766365   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:43:56.768054   45269 fix.go:102] recreateIfNeeded on old-k8s-version-732472: state=Stopped err=<nil>
	I1128 00:43:56.768082   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	W1128 00:43:56.768219   45269 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:43:56.771618   45269 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-732472" ...
	I1128 00:43:55.486531   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.487099   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Found IP for machine: 192.168.72.242
	I1128 00:43:55.487128   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Reserving static IP address...
	I1128 00:43:55.487158   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has current primary IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.487539   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-488423", mac: "52:54:00:4c:3b:25", ip: "192.168.72.242"} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.487574   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | skip adding static IP to network mk-default-k8s-diff-port-488423 - found existing host DHCP lease matching {name: "default-k8s-diff-port-488423", mac: "52:54:00:4c:3b:25", ip: "192.168.72.242"}
	I1128 00:43:55.487595   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Reserved static IP address: 192.168.72.242
	I1128 00:43:55.487609   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for SSH to be available...
	I1128 00:43:55.487622   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Getting to WaitForSSH function...
	I1128 00:43:55.489858   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.490219   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.490253   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.490324   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Using SSH client type: external
	I1128 00:43:55.490373   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa (-rw-------)
	I1128 00:43:55.490414   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.242 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:43:55.490431   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | About to run SSH command:
	I1128 00:43:55.490447   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | exit 0
	I1128 00:43:55.584551   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | SSH cmd err, output: <nil>: 
	I1128 00:43:55.584987   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetConfigRaw
	I1128 00:43:55.585628   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetIP
	I1128 00:43:55.588444   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.588889   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.588924   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.589207   46126 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/config.json ...
	I1128 00:43:55.589475   46126 machine.go:88] provisioning docker machine ...
	I1128 00:43:55.589501   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:55.589744   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetMachineName
	I1128 00:43:55.590007   46126 buildroot.go:166] provisioning hostname "default-k8s-diff-port-488423"
	I1128 00:43:55.590031   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetMachineName
	I1128 00:43:55.590203   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:55.592733   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.593136   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.593170   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.593313   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:55.593480   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.593628   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.593756   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:55.593918   46126 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:55.594316   46126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.242 22 <nil> <nil>}
	I1128 00:43:55.594333   46126 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-488423 && echo "default-k8s-diff-port-488423" | sudo tee /etc/hostname
	I1128 00:43:55.739338   46126 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-488423
	
	I1128 00:43:55.739368   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:55.742483   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.742870   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.742906   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.743009   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:55.743215   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.743365   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.743512   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:55.743669   46126 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:55.744119   46126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.242 22 <nil> <nil>}
	I1128 00:43:55.744140   46126 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-488423' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-488423/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-488423' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:43:55.883117   46126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:43:55.883146   46126 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:43:55.883185   46126 buildroot.go:174] setting up certificates
	I1128 00:43:55.883198   46126 provision.go:83] configureAuth start
	I1128 00:43:55.883216   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetMachineName
	I1128 00:43:55.883566   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetIP
	I1128 00:43:55.886292   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.886625   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.886652   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.886796   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:55.888873   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.889213   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.889233   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.889347   46126 provision.go:138] copyHostCerts
	I1128 00:43:55.889401   46126 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:43:55.889413   46126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:43:55.889478   46126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:43:55.889611   46126 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:43:55.889623   46126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:43:55.889650   46126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:43:55.889729   46126 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:43:55.889738   46126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:43:55.889765   46126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:43:55.889848   46126 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-488423 san=[192.168.72.242 192.168.72.242 localhost 127.0.0.1 minikube default-k8s-diff-port-488423]
	I1128 00:43:55.945434   46126 provision.go:172] copyRemoteCerts
	I1128 00:43:55.945516   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:43:55.945547   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:55.948894   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.949387   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.949422   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.949800   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:55.950023   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.950215   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:55.950367   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:43:56.045647   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:43:56.069972   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1128 00:43:56.093947   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 00:43:56.118840   46126 provision.go:86] duration metric: configureAuth took 235.628083ms
	I1128 00:43:56.118867   46126 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:43:56.119072   46126 config.go:182] Loaded profile config "default-k8s-diff-port-488423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:43:56.119159   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.122135   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.122514   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.122550   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.122680   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.122884   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.123076   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.123253   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.123418   46126 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:56.123729   46126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.242 22 <nil> <nil>}
	I1128 00:43:56.123746   46126 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:43:56.476330   46126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:43:56.476360   46126 machine.go:91] provisioned docker machine in 886.868182ms
	I1128 00:43:56.476384   46126 start.go:300] post-start starting for "default-k8s-diff-port-488423" (driver="kvm2")
	I1128 00:43:56.476399   46126 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:43:56.476422   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.476787   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:43:56.476824   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.479803   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.480168   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.480208   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.480342   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.480542   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.480729   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.480901   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:43:56.574040   46126 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:43:56.578163   46126 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:43:56.578186   46126 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:43:56.578247   46126 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:43:56.578339   46126 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:43:56.578455   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:43:56.586455   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:56.613452   46126 start.go:303] post-start completed in 137.050871ms
	I1128 00:43:56.613484   46126 fix.go:56] fixHost completed within 19.807643021s
	I1128 00:43:56.613510   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.616834   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.617216   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.617253   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.617478   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.617686   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.617859   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.618105   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.618302   46126 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:56.618618   46126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.242 22 <nil> <nil>}
	I1128 00:43:56.618630   46126 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:43:56.745691   46126 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701132236.690190729
	
	I1128 00:43:56.745711   46126 fix.go:206] guest clock: 1701132236.690190729
	I1128 00:43:56.745731   46126 fix.go:219] Guest: 2023-11-28 00:43:56.690190729 +0000 UTC Remote: 2023-11-28 00:43:56.613489194 +0000 UTC m=+194.421672716 (delta=76.701535ms)
	I1128 00:43:56.745784   46126 fix.go:190] guest clock delta is within tolerance: 76.701535ms
	I1128 00:43:56.745798   46126 start.go:83] releasing machines lock for "default-k8s-diff-port-488423", held for 19.939986738s
	I1128 00:43:56.745837   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.746091   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetIP
	I1128 00:43:56.749097   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.749453   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.749486   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.749648   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.750192   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.750392   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.750446   46126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:43:56.750493   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.750661   46126 ssh_runner.go:195] Run: cat /version.json
	I1128 00:43:56.750685   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.753480   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.753655   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.753948   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.753976   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.754096   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.754163   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.754191   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.754241   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.754327   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.754474   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.754489   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.754621   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:43:56.754644   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.754779   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:43:56.850794   46126 ssh_runner.go:195] Run: systemctl --version
	I1128 00:43:56.872044   46126 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:43:57.016328   46126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:43:57.022389   46126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:43:57.022463   46126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:43:57.039925   46126 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:43:57.039959   46126 start.go:472] detecting cgroup driver to use...
	I1128 00:43:57.040030   46126 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:43:57.056385   46126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:43:57.068344   46126 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:43:57.068413   46126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:43:57.081752   46126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:43:57.095169   46126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:43:57.192392   46126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:43:56.772995   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Start
	I1128 00:43:56.773150   45269 main.go:141] libmachine: (old-k8s-version-732472) Ensuring networks are active...
	I1128 00:43:56.774032   45269 main.go:141] libmachine: (old-k8s-version-732472) Ensuring network default is active
	I1128 00:43:56.774327   45269 main.go:141] libmachine: (old-k8s-version-732472) Ensuring network mk-old-k8s-version-732472 is active
	I1128 00:43:56.774732   45269 main.go:141] libmachine: (old-k8s-version-732472) Getting domain xml...
	I1128 00:43:56.775433   45269 main.go:141] libmachine: (old-k8s-version-732472) Creating domain...
	I1128 00:43:53.781169   45580 pod_ready.go:92] pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:53.781193   45580 pod_ready.go:81] duration metric: took 4.766915226s waiting for pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.781203   45580 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6d4rt" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.789370   45580 pod_ready.go:92] pod "kube-proxy-6d4rt" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:53.789400   45580 pod_ready.go:81] duration metric: took 8.189391ms waiting for pod "kube-proxy-6d4rt" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.789412   45580 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.794277   45580 pod_ready.go:92] pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:53.794299   45580 pod_ready.go:81] duration metric: took 4.87905ms waiting for pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.794307   45580 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:55.984645   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:43:57.310000   46126 docker.go:219] disabling docker service ...
	I1128 00:43:57.310066   46126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:43:57.324484   46126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:43:57.339752   46126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:43:57.444051   46126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:43:57.557773   46126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:43:57.571662   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:43:57.591169   46126 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 00:43:57.591230   46126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:57.605399   46126 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:43:57.605462   46126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:57.617783   46126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:57.629258   46126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:57.639844   46126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:43:57.651810   46126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:43:57.663353   46126 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:43:57.663403   46126 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:43:57.679095   46126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:43:57.688096   46126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:43:57.795868   46126 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 00:43:57.970597   46126 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:43:57.970661   46126 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:43:57.975830   46126 start.go:540] Will wait 60s for crictl version
	I1128 00:43:57.975900   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:43:57.980469   46126 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:43:58.022819   46126 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:43:58.022932   46126 ssh_runner.go:195] Run: crio --version
	I1128 00:43:58.078060   46126 ssh_runner.go:195] Run: crio --version
	I1128 00:43:58.130219   46126 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 00:43:55.298307   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0: (2.661319898s)
	I1128 00:43:55.298330   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.0 from cache
	I1128 00:43:55.298358   45815 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1128 00:43:55.298411   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1128 00:43:56.256987   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1128 00:43:56.257041   45815 cache_images.go:123] Successfully loaded all cached images
	I1128 00:43:56.257048   45815 cache_images.go:92] LoadImages completed in 17.872790347s
	I1128 00:43:56.257142   45815 ssh_runner.go:195] Run: crio config
	I1128 00:43:56.342206   45815 cni.go:84] Creating CNI manager for ""
	I1128 00:43:56.342230   45815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:43:56.342248   45815 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:43:56.342265   45815 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.195 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-473615 NodeName:no-preload-473615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:43:56.342421   45815 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-473615"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:43:56.342519   45815 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-473615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.0 ClusterName:no-preload-473615 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 00:43:56.342581   45815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.0
	I1128 00:43:56.352200   45815 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:43:56.352275   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:43:56.360863   45815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1128 00:43:56.378620   45815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1128 00:43:56.396120   45815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1128 00:43:56.415090   45815 ssh_runner.go:195] Run: grep 192.168.61.195	control-plane.minikube.internal$ /etc/hosts
	I1128 00:43:56.419072   45815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:56.434497   45815 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615 for IP: 192.168.61.195
	I1128 00:43:56.434534   45815 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:43:56.434702   45815 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:43:56.434766   45815 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:43:56.434899   45815 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.key
	I1128 00:43:56.434990   45815 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/apiserver.key.6c770a2d
	I1128 00:43:56.435043   45815 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/proxy-client.key
	I1128 00:43:56.435190   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:43:56.435231   45815 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:43:56.435249   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:43:56.435280   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:43:56.435317   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:43:56.435348   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:43:56.435402   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:56.436170   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:43:56.464712   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 00:43:56.492394   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:43:56.517331   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1128 00:43:56.540656   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:43:56.562997   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:43:56.587574   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:43:56.614358   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:43:56.640027   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:43:56.666632   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:43:56.690925   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:43:56.716816   45815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:43:56.734079   45815 ssh_runner.go:195] Run: openssl version
	I1128 00:43:56.739942   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:43:56.751230   45815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:56.757607   45815 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:56.757662   45815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:56.764184   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:43:56.777196   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:43:56.788408   45815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:43:56.793610   45815 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:43:56.793667   45815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:43:56.799203   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:43:56.809821   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:43:56.820489   45815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:43:56.825268   45815 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:43:56.825335   45815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:43:56.830869   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:43:56.843707   45815 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:43:56.848717   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:43:56.855268   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:43:56.861889   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:43:56.867773   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:43:56.874642   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:43:56.882143   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1128 00:43:56.889812   45815 kubeadm.go:404] StartCluster: {Name:no-preload-473615 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.0 ClusterName:no-preload-473615 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.195 Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:43:56.889969   45815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:43:56.890021   45815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:43:56.931994   45815 cri.go:89] found id: ""
	I1128 00:43:56.932061   45815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:43:56.941996   45815 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:43:56.942014   45815 kubeadm.go:636] restartCluster start
	I1128 00:43:56.942074   45815 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:43:56.950854   45815 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:56.951919   45815 kubeconfig.go:92] found "no-preload-473615" server: "https://192.168.61.195:8443"
	I1128 00:43:56.954777   45815 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:43:56.963839   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:56.963902   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:56.974803   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:56.974821   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:56.974869   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:56.989023   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:57.489949   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:57.490022   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:57.501695   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:57.989930   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:57.990014   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:58.002435   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:58.489856   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:58.489946   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:58.506641   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:58.131523   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetIP
	I1128 00:43:58.134378   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:58.134826   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:58.134859   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:58.135087   46126 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1128 00:43:58.139363   46126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:58.151488   46126 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:43:58.151552   46126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:43:58.193551   46126 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 00:43:58.193618   46126 ssh_runner.go:195] Run: which lz4
	I1128 00:43:58.197624   46126 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1128 00:43:58.201842   46126 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 00:43:58.201875   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 00:44:00.068140   46126 crio.go:444] Took 1.870561 seconds to copy over tarball
	I1128 00:44:00.068221   46126 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 00:43:58.122924   45269 main.go:141] libmachine: (old-k8s-version-732472) Waiting to get IP...
	I1128 00:43:58.123826   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:58.124165   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:58.124263   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:58.124146   46882 retry.go:31] will retry after 249.216665ms: waiting for machine to come up
	I1128 00:43:58.374969   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:58.375510   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:58.375537   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:58.375457   46882 retry.go:31] will retry after 317.223146ms: waiting for machine to come up
	I1128 00:43:58.694027   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:58.694483   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:58.694535   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:58.694443   46882 retry.go:31] will retry after 362.880377ms: waiting for machine to come up
	I1128 00:43:59.058976   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:59.059623   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:59.059650   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:59.059571   46882 retry.go:31] will retry after 545.497342ms: waiting for machine to come up
	I1128 00:43:59.606962   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:59.607607   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:59.607633   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:59.607558   46882 retry.go:31] will retry after 678.467206ms: waiting for machine to come up
	I1128 00:44:00.287531   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:00.288062   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:00.288103   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:00.288054   46882 retry.go:31] will retry after 817.7633ms: waiting for machine to come up
	I1128 00:44:01.107179   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:01.107748   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:01.107776   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:01.107690   46882 retry.go:31] will retry after 1.02533736s: waiting for machine to come up
	I1128 00:44:02.134384   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:02.134940   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:02.134972   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:02.134867   46882 retry.go:31] will retry after 1.291264059s: waiting for machine to come up
	I1128 00:43:58.491595   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:00.983179   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:43:58.989453   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:58.989568   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:59.006339   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:59.489912   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:59.490007   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:59.505297   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:59.989924   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:59.990020   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:00.004118   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:00.489346   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:00.489421   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:00.504026   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:00.989739   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:00.989828   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:01.006279   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:01.489872   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:01.489975   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:01.504734   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:01.989185   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:01.989269   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:02.000313   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:02.489165   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:02.489246   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:02.505444   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:02.989956   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:02.990024   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:03.003038   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:03.489556   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:03.489663   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:03.502192   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:03.282407   46126 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.2141625s)
	I1128 00:44:03.282432   46126 crio.go:451] Took 3.214263 seconds to extract the tarball
	I1128 00:44:03.282440   46126 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 00:44:03.324470   46126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:44:03.375858   46126 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 00:44:03.375881   46126 cache_images.go:84] Images are preloaded, skipping loading
	I1128 00:44:03.375944   46126 ssh_runner.go:195] Run: crio config
	I1128 00:44:03.440441   46126 cni.go:84] Creating CNI manager for ""
	I1128 00:44:03.440462   46126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:03.440479   46126 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:44:03.440496   46126 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.242 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-488423 NodeName:default-k8s-diff-port-488423 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.242"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.242 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:44:03.440670   46126 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.242
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-488423"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.242
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.242"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:44:03.440746   46126 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-488423 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.242
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-488423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1128 00:44:03.440830   46126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 00:44:03.450060   46126 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:44:03.450138   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:44:03.458748   46126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1128 00:44:03.475315   46126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:44:03.492886   46126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1128 00:44:03.509665   46126 ssh_runner.go:195] Run: grep 192.168.72.242	control-plane.minikube.internal$ /etc/hosts
	I1128 00:44:03.513441   46126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.242	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:44:03.527336   46126 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423 for IP: 192.168.72.242
	I1128 00:44:03.527373   46126 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:44:03.527539   46126 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:44:03.527592   46126 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:44:03.527690   46126 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/client.key
	I1128 00:44:03.527770   46126 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/apiserver.key.05574f60
	I1128 00:44:03.527827   46126 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/proxy-client.key
	I1128 00:44:03.527966   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:44:03.528009   46126 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:44:03.528024   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:44:03.528062   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:44:03.528098   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:44:03.528133   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:44:03.528188   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:44:03.528787   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:44:03.553210   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 00:44:03.578548   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:44:03.604661   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 00:44:03.627640   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:44:03.653147   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:44:03.681991   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:44:03.706068   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:44:03.730092   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:44:03.751326   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:44:03.776165   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:44:03.801844   46126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:44:03.819762   46126 ssh_runner.go:195] Run: openssl version
	I1128 00:44:03.826895   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:44:03.836806   46126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:44:03.842921   46126 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:44:03.842983   46126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:44:03.848802   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:44:03.859065   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:44:03.869720   46126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:03.874600   46126 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:03.874670   46126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:03.880712   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:44:03.891524   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:44:03.901286   46126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:44:03.906102   46126 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:44:03.906163   46126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:44:03.911563   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:44:03.921606   46126 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:44:03.926553   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:44:03.932640   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:44:03.938482   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:44:03.944483   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:44:03.950430   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:44:03.956197   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1128 00:44:03.962543   46126 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-488423 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-488423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.242 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:44:03.962647   46126 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:44:03.962700   46126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:04.014418   46126 cri.go:89] found id: ""
	I1128 00:44:04.014499   46126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:44:04.024132   46126 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:44:04.024178   46126 kubeadm.go:636] restartCluster start
	I1128 00:44:04.024239   46126 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:44:04.032856   46126 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.034010   46126 kubeconfig.go:92] found "default-k8s-diff-port-488423" server: "https://192.168.72.242:8444"
	I1128 00:44:04.036458   46126 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:44:04.044461   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.044513   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.054697   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.054714   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.054759   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.066995   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.567687   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.567784   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.579528   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:05.067882   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:05.067970   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:05.082579   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:05.568116   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:05.568240   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:05.579606   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:06.067125   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:06.067229   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:06.078637   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:06.567159   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:06.567258   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:06.578623   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:07.067770   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:07.067864   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:07.081883   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:03.427919   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:03.428413   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:03.428442   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:03.428350   46882 retry.go:31] will retry after 1.150784696s: waiting for machine to come up
	I1128 00:44:04.580519   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:04.580976   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:04.581008   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:04.580941   46882 retry.go:31] will retry after 1.981268381s: waiting for machine to come up
	I1128 00:44:06.564123   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:06.564623   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:06.564641   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:06.564596   46882 retry.go:31] will retry after 2.79895226s: waiting for machine to come up
	I1128 00:44:02.984445   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:05.483562   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:03.989899   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:03.995828   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.009197   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.489749   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.489829   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.501445   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.989934   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.990019   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:05.004077   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:05.489549   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:05.489634   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:05.501227   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:05.989858   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:05.989940   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:06.003151   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:06.489699   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:06.489785   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:06.502937   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:06.964667   45815 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:44:06.964705   45815 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:44:06.964720   45815 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:44:06.964808   45815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:07.008487   45815 cri.go:89] found id: ""
	I1128 00:44:07.008572   45815 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:44:07.028576   45815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:44:07.040057   45815 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:44:07.040130   45815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:07.050063   45815 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:07.050085   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:07.199305   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:08.265283   45815 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.065924411s)
	I1128 00:44:08.265324   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:08.468254   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:08.570027   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
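The five `kubeadm init phase` invocations above (certs, kubeconfig, kubelet-start, control-plane, etcd) are how the restart path rebuilds a control plane whose config files were wiped. A minimal sketch of replaying that phase sequence against a kubeadm config file (the helper and its local invocation are illustrative, not minikube's actual code; paths are taken from the log):

```go
package main

import (
	"fmt"
	"os/exec"
)

// runKubeadmPhases replays the phase sequence seen in the log against a
// given kubeadm config file. Running kubeadm locally (rather than over
// SSH inside the VM) is an assumption made for this sketch.
func runKubeadmPhases(kubeadmCfg string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", kubeadmCfg)
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm %v: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	if err := runKubeadmPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}
```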
	I1128 00:44:08.650823   45815 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:44:08.650900   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:08.667640   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:07.567667   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:07.567751   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:07.580778   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:08.067282   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:08.067368   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:08.080618   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:08.567146   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:08.567232   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:08.580324   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:09.067606   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:09.067728   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:09.083426   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:09.567987   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:09.568084   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:09.579657   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:10.067205   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:10.067292   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:10.082466   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:10.568064   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:10.568159   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:10.583356   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:11.067987   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:11.068114   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:11.084486   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:11.567945   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:11.568057   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:11.583108   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:12.068099   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:12.068186   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:12.079172   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:09.366118   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:09.366642   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:09.366677   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:09.366580   46882 retry.go:31] will retry after 2.538437833s: waiting for machine to come up
	I1128 00:44:11.906292   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:11.906799   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:11.906823   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:11.906751   46882 retry.go:31] will retry after 4.351501946s: waiting for machine to come up
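The "will retry after ..." lines above come from a wait loop whose delay grows between attempts while the VM acquires a DHCP lease. A small sketch of that style of jittered, growing-delay retry (the schedule and cap here are assumptions, not minikube's exact retry.go behavior):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForMachine polls a readiness check with a delay that grows on each
// attempt, similar in spirit to the "will retry after ..." lines above.
func waitForMachine(ready func() bool, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := time.Second
	for attempt := 1; ; attempt++ {
		if ready() {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machine to come up")
		}
		// Add some jitter and grow the delay, capping it so individual
		// waits do not drift too far apart.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d: will retry after %s\n", attempt, sleep)
		time.Sleep(sleep)
		if delay < 8*time.Second {
			delay *= 2
		}
	}
}

func main() {
	start := time.Now()
	err := waitForMachine(func() bool { return time.Since(start) > 3*time.Second }, 30*time.Second)
	fmt.Println("result:", err)
}
```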
	I1128 00:44:07.983966   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:09.985333   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:12.483805   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:09.182449   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:09.681686   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:10.181905   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:10.681692   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:11.181652   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:11.209900   45815 api_server.go:72] duration metric: took 2.559073582s to wait for apiserver process to appear ...
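Each "Checking apiserver status" / "stopped: unable to get apiserver pid" pair above is a single `pgrep` probe: exit status 1 simply means no kube-apiserver process matched yet, and the probe is repeated until one does. A minimal sketch of that check (running pgrep via sudo, as in the log, is assumed; this is not minikube's api_server.go):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPID mirrors the probe in the log: pgrep exits with status 1
// when no matching process exists, which is what every "unable to get
// apiserver pid" warning above reports.
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("apiserver not running yet: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	pid, err := apiserverPID()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}
```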
	I1128 00:44:11.209935   45815 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:44:11.209954   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:15.242230   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:15.242261   45815 api_server.go:103] status: https://192.168.61.195:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:15.242276   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:15.285509   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:15.285538   45815 api_server.go:103] status: https://192.168.61.195:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:15.786232   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:15.791529   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:15.791565   45815 api_server.go:103] status: https://192.168.61.195:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:16.285909   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:16.290996   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:16.291040   45815 api_server.go:103] status: https://192.168.61.195:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:16.786199   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:16.792488   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 200:
	ok
	I1128 00:44:16.805778   45815 api_server.go:141] control plane version: v1.29.0-rc.0
	I1128 00:44:16.805807   45815 api_server.go:131] duration metric: took 5.595863517s to wait for apiserver health ...
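The healthz sequence above is typical for an apiserver that is still bootstrapping: anonymous requests first get 403, then 500 while post-start hooks such as `rbac/bootstrap-roles` are pending, and finally 200 "ok". A minimal sketch of issuing the same kind of probe (skipping TLS verification is an assumption for this local sketch; only a 200 counts as healthy):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues a GET against the apiserver's /healthz endpoint and
// returns the status code plus body, as in the log lines above.
func checkHealthz(url string) (int, string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return 0, "", err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode, string(body), nil
}

func main() {
	code, body, err := checkHealthz("https://192.168.61.195:8443/healthz")
	fmt.Println(code, body, err)
}
```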
	I1128 00:44:16.805817   45815 cni.go:84] Creating CNI manager for ""
	I1128 00:44:16.805825   45815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:16.807924   45815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:44:12.567969   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:12.568085   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:12.579496   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:13.068092   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:13.068164   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:13.079081   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:13.567677   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:13.567773   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:13.579000   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:14.044782   46126 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:44:14.044818   46126 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:44:14.044832   46126 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:44:14.044927   46126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:14.090411   46126 cri.go:89] found id: ""
	I1128 00:44:14.090487   46126 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:44:14.106216   46126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:44:14.116309   46126 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:44:14.116367   46126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:14.125060   46126 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:14.125082   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:14.259194   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:14.923712   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:15.113501   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:15.221455   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:15.317171   46126 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:44:15.317269   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:15.332625   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:15.847268   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:16.347347   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:16.847441   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:16.259741   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.260326   45269 main.go:141] libmachine: (old-k8s-version-732472) Found IP for machine: 192.168.39.172
	I1128 00:44:16.260347   45269 main.go:141] libmachine: (old-k8s-version-732472) Reserving static IP address...
	I1128 00:44:16.260368   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has current primary IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.260940   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "old-k8s-version-732472", mac: "52:54:00:ff:2b:fd", ip: "192.168.39.172"} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.260978   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | skip adding static IP to network mk-old-k8s-version-732472 - found existing host DHCP lease matching {name: "old-k8s-version-732472", mac: "52:54:00:ff:2b:fd", ip: "192.168.39.172"}
	I1128 00:44:16.261003   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Getting to WaitForSSH function...
	I1128 00:44:16.261021   45269 main.go:141] libmachine: (old-k8s-version-732472) Reserved static IP address: 192.168.39.172
	I1128 00:44:16.261037   45269 main.go:141] libmachine: (old-k8s-version-732472) Waiting for SSH to be available...
	I1128 00:44:16.264000   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.264370   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.264402   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.264496   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Using SSH client type: external
	I1128 00:44:16.264560   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa (-rw-------)
	I1128 00:44:16.264600   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:44:16.264624   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | About to run SSH command:
	I1128 00:44:16.264641   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | exit 0
	I1128 00:44:16.373651   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | SSH cmd err, output: <nil>: 
	I1128 00:44:16.374185   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetConfigRaw
	I1128 00:44:16.374992   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetIP
	I1128 00:44:16.378530   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.378958   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.378987   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.379390   45269 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/config.json ...
	I1128 00:44:16.379622   45269 machine.go:88] provisioning docker machine ...
	I1128 00:44:16.379646   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:16.379854   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetMachineName
	I1128 00:44:16.380005   45269 buildroot.go:166] provisioning hostname "old-k8s-version-732472"
	I1128 00:44:16.380024   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetMachineName
	I1128 00:44:16.380152   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:16.382908   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.383346   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.383376   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.383604   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:16.383824   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:16.384024   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:16.384179   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:16.384365   45269 main.go:141] libmachine: Using SSH client type: native
	I1128 00:44:16.384875   45269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1128 00:44:16.384902   45269 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-732472 && echo "old-k8s-version-732472" | sudo tee /etc/hostname
	I1128 00:44:16.547302   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-732472
	
	I1128 00:44:16.547378   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:16.550883   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.551409   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.551448   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.551634   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:16.551888   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:16.552113   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:16.552258   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:16.552465   45269 main.go:141] libmachine: Using SSH client type: native
	I1128 00:44:16.552965   45269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1128 00:44:16.552994   45269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-732472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-732472/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-732472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:44:16.705539   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:44:16.705577   45269 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:44:16.705601   45269 buildroot.go:174] setting up certificates
	I1128 00:44:16.705611   45269 provision.go:83] configureAuth start
	I1128 00:44:16.705622   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetMachineName
	I1128 00:44:16.705962   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetIP
	I1128 00:44:16.708726   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.709231   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.709283   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.709531   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:16.712023   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.712491   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.712524   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.712658   45269 provision.go:138] copyHostCerts
	I1128 00:44:16.712720   45269 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:44:16.712734   45269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:44:16.712835   45269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:44:16.712990   45269 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:44:16.713005   45269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:44:16.713041   45269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:44:16.713154   45269 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:44:16.713168   45269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:44:16.713201   45269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:44:16.713291   45269 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-732472 san=[192.168.39.172 192.168.39.172 localhost 127.0.0.1 minikube old-k8s-version-732472]
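The server certificate generated above carries SANs for the VM IP, localhost, 127.0.0.1, "minikube" and the machine name, so the Docker/SSH endpoint can be verified under any of those addresses. A condensed sketch of issuing such a SAN-bearing certificate with crypto/x509 (self-signed here for brevity; the real flow signs with the ca.pem/ca-key.pem shown in the log):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-732472"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration value from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the "san=[...]" list in the log line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-732472"},
		IPAddresses: []net.IP{net.ParseIP("192.168.39.172"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed for the sketch; minikube signs the server cert with its CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
```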
	I1128 00:44:17.255079   45269 provision.go:172] copyRemoteCerts
	I1128 00:44:17.255157   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:44:17.255184   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:17.258078   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.258486   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:17.258522   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.258704   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:17.258892   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.259071   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:17.259278   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:44:17.360891   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1128 00:44:14.981992   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:16.984334   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:16.809569   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:44:16.837545   45815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
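The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration announced a few lines earlier. Its exact contents are not reproduced in the log; a representative bridge-plus-portmap conflist written the same way might look roughly like the following (the subnet, plugin options, and file mode are assumptions, not minikube's verbatim template):

```go
package main

import (
	"fmt"
	"os"
)

// A representative bridge CNI config; the real 1-k8s.conflist may differ
// in plugin options and pod subnet.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println("write conflist:", err)
	}
}
```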
	I1128 00:44:16.884377   45815 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:44:16.901252   45815 system_pods.go:59] 9 kube-system pods found
	I1128 00:44:16.901296   45815 system_pods.go:61] "coredns-76f75df574-54p94" [fc2580d3-8c03-46c8-aa43-fce9472a4bbc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:44:16.901310   45815 system_pods.go:61] "coredns-76f75df574-9ptz7" [c51a1796-37bb-411b-8477-fb4d8c7e7cb2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:44:16.901322   45815 system_pods.go:61] "etcd-no-preload-473615" [c789418f-23b1-4e84-95df-e339afc358e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 00:44:16.901337   45815 system_pods.go:61] "kube-apiserver-no-preload-473615" [204c5f02-7e14-4761-9af0-606f227dee63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 00:44:16.901351   45815 system_pods.go:61] "kube-controller-manager-no-preload-473615" [2d96a78f-b0c9-4731-a8a1-ec63459a09ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 00:44:16.901368   45815 system_pods.go:61] "kube-proxy-trr4j" [df593d3d-db4c-45f9-ad79-f35fe2cdef84] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 00:44:16.901379   45815 system_pods.go:61] "kube-scheduler-no-preload-473615" [5fe2c87b-af8b-4184-8b62-399e488dcb5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 00:44:16.901393   45815 system_pods.go:61] "metrics-server-57f55c9bc5-lh4m8" [4c3ae55b-befb-44d2-8982-592acdf3eab9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:44:16.901408   45815 system_pods.go:61] "storage-provisioner" [a3e71dd4-570e-4895-aac4-d98dfbd69a6a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 00:44:16.901423   45815 system_pods.go:74] duration metric: took 17.023663ms to wait for pod list to return data ...
	I1128 00:44:16.901434   45815 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:44:16.905738   45815 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:44:16.905766   45815 node_conditions.go:123] node cpu capacity is 2
	I1128 00:44:16.905776   45815 node_conditions.go:105] duration metric: took 4.335236ms to run NodePressure ...
	I1128 00:44:16.905791   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:17.532813   45815 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:44:17.548788   45815 kubeadm.go:787] kubelet initialised
	I1128 00:44:17.548814   45815 kubeadm.go:788] duration metric: took 15.969396ms waiting for restarted kubelet to initialise ...
	I1128 00:44:17.548824   45815 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:44:17.569590   45815 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-54p94" in "kube-system" namespace to be "Ready" ...
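The "waiting up to 4m0s ... to be Ready" lines above poll each system-critical pod until its PodReady condition turns True. A client-go sketch of that readiness check (kubeconfig path and pod name are placeholders; this is the standard condition check, not minikube's pod_ready.go itself):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True, which
// is the state the "waiting ... to be Ready" lines are polling for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-54p94", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
```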
	I1128 00:44:17.388160   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 00:44:17.415589   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:44:17.443880   45269 provision.go:86] duration metric: configureAuth took 738.257631ms
	I1128 00:44:17.443913   45269 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:44:17.444142   45269 config.go:182] Loaded profile config "old-k8s-version-732472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 00:44:17.444240   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:17.447355   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.447699   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:17.447726   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.447980   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:17.448213   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.448382   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.448542   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:17.448730   45269 main.go:141] libmachine: Using SSH client type: native
	I1128 00:44:17.449148   45269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1128 00:44:17.449173   45269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:44:17.825162   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:44:17.825202   45269 machine.go:91] provisioned docker machine in 1.445550198s
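The `%!s(MISSING)` fragments in the provisioning command above (and in the `date +%!s(MISSING).%!N(MISSING)` command a few lines further down) are Go's fmt package flagging format verbs that received no argument when the command string was formatted for the log; the shell commands themselves still ran with the intended `%s`/`%N` literals. A two-line illustration of that behavior:

```go
package main

import "fmt"

func main() {
	// A verb with no matching argument is rendered as %!<verb>(MISSING),
	// which is exactly the artifact visible in the logged SSH commands.
	s := fmt.Sprintf("date +%s.%N")
	fmt.Println(s) // prints: date +%!s(MISSING).%!N(MISSING)
}
```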
	I1128 00:44:17.825215   45269 start.go:300] post-start starting for "old-k8s-version-732472" (driver="kvm2")
	I1128 00:44:17.825229   45269 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:44:17.825255   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:17.825631   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:44:17.825665   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:17.829047   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.829650   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:17.829813   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.829885   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:17.830108   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.830270   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:17.830427   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:44:17.933926   45269 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:44:17.939164   45269 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:44:17.939192   45269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:44:17.939273   45269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:44:17.939364   45269 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:44:17.939481   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:44:17.950901   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:44:17.983827   45269 start.go:303] post-start completed in 158.593642ms
	I1128 00:44:17.983856   45269 fix.go:56] fixHost completed within 21.237897087s
	I1128 00:44:17.983880   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:17.988473   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.988983   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:17.989011   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.989353   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:17.989611   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.989755   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.989981   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:17.990202   45269 main.go:141] libmachine: Using SSH client type: native
	I1128 00:44:17.990729   45269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1128 00:44:17.990748   45269 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:44:18.139114   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701132258.087547922
	
	I1128 00:44:18.139142   45269 fix.go:206] guest clock: 1701132258.087547922
	I1128 00:44:18.139154   45269 fix.go:219] Guest: 2023-11-28 00:44:18.087547922 +0000 UTC Remote: 2023-11-28 00:44:17.983860571 +0000 UTC m=+360.654750753 (delta=103.687351ms)
	I1128 00:44:18.139206   45269 fix.go:190] guest clock delta is within tolerance: 103.687351ms
	I1128 00:44:18.139217   45269 start.go:83] releasing machines lock for "old-k8s-version-732472", held for 21.393285553s
	I1128 00:44:18.139256   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:18.139552   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetIP
	I1128 00:44:18.142899   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.143376   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:18.143407   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.143562   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:18.144123   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:18.144308   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:18.144414   45269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:44:18.144473   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:18.144586   45269 ssh_runner.go:195] Run: cat /version.json
	I1128 00:44:18.144614   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:18.147761   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.147994   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.148459   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:18.148542   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.148581   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:18.148605   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.148878   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:18.148892   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:18.149080   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:18.149094   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:18.149266   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:18.149288   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:18.149473   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:44:18.149488   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:44:18.271569   45269 ssh_runner.go:195] Run: systemctl --version
	I1128 00:44:18.277814   45269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:44:18.432301   45269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:44:18.438677   45269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:44:18.438749   45269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:44:18.455128   45269 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:44:18.455155   45269 start.go:472] detecting cgroup driver to use...
	I1128 00:44:18.455229   45269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:44:18.472928   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:44:18.490329   45269 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:44:18.490409   45269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:44:18.505705   45269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:44:18.523509   45269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:44:18.696691   45269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:44:18.829641   45269 docker.go:219] disabling docker service ...
	I1128 00:44:18.829775   45269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:44:18.847903   45269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:44:18.863690   45269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:44:19.002181   45269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:44:19.130955   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:44:19.146034   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:44:19.165714   45269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1128 00:44:19.165790   45269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:44:19.176303   45269 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:44:19.176368   45269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:44:19.186698   45269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:44:19.196137   45269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:44:19.205054   45269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:44:19.215067   45269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:44:19.224332   45269 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:44:19.224376   45269 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:44:19.238079   45269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:44:19.246692   45269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:44:19.360913   45269 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 00:44:19.548488   45269 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:44:19.548563   45269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:44:19.553293   45269 start.go:540] Will wait 60s for crictl version
	I1128 00:44:19.553362   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:19.557103   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:44:19.605572   45269 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:44:19.605662   45269 ssh_runner.go:195] Run: crio --version
	I1128 00:44:19.655808   45269 ssh_runner.go:195] Run: crio --version
	I1128 00:44:19.709415   45269 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1128 00:44:17.346814   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:17.847354   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:17.878161   46126 api_server.go:72] duration metric: took 2.560990106s to wait for apiserver process to appear ...
	I1128 00:44:17.878189   46126 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:44:17.878218   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:17.878696   46126 api_server.go:269] stopped: https://192.168.72.242:8444/healthz: Get "https://192.168.72.242:8444/healthz": dial tcp 192.168.72.242:8444: connect: connection refused
	I1128 00:44:17.878732   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:17.879110   46126 api_server.go:269] stopped: https://192.168.72.242:8444/healthz: Get "https://192.168.72.242:8444/healthz": dial tcp 192.168.72.242:8444: connect: connection refused
	I1128 00:44:18.379800   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:19.710653   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetIP
	I1128 00:44:19.713912   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:19.714358   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:19.714402   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:19.714586   45269 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1128 00:44:19.719516   45269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:44:19.736367   45269 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1128 00:44:19.736422   45269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:44:19.788917   45269 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1128 00:44:19.789021   45269 ssh_runner.go:195] Run: which lz4
	I1128 00:44:19.793502   45269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 00:44:19.797933   45269 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 00:44:19.797967   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1128 00:44:21.595649   45269 crio.go:444] Took 1.802185 seconds to copy over tarball
	I1128 00:44:21.595754   45269 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 00:44:19.483696   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:21.485632   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:19.612824   45815 pod_ready.go:102] pod "coredns-76f75df574-54p94" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:22.111469   45815 pod_ready.go:92] pod "coredns-76f75df574-54p94" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:22.111506   45815 pod_ready.go:81] duration metric: took 4.541884744s waiting for pod "coredns-76f75df574-54p94" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:22.111522   45815 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-9ptz7" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:22.118896   45815 pod_ready.go:92] pod "coredns-76f75df574-9ptz7" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:22.118916   45815 pod_ready.go:81] duration metric: took 7.386009ms waiting for pod "coredns-76f75df574-9ptz7" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:22.118925   45815 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:22.651574   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:22.651606   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:22.651632   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:22.731086   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:22.731124   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:22.879396   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:22.889686   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:22.889721   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:23.380219   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:23.387416   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:23.387458   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:23.880170   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:23.886215   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:23.886286   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:24.380095   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:24.387531   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 200:
	ok
	I1128 00:44:24.411131   46126 api_server.go:141] control plane version: v1.28.4
	I1128 00:44:24.411169   46126 api_server.go:131] duration metric: took 6.532961174s to wait for apiserver health ...
	I1128 00:44:24.411180   46126 cni.go:84] Creating CNI manager for ""
	I1128 00:44:24.411186   46126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:24.701599   46126 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:44:24.853101   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:44:24.878687   46126 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:44:24.924669   46126 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:44:24.942030   46126 system_pods.go:59] 8 kube-system pods found
	I1128 00:44:24.942063   46126 system_pods.go:61] "coredns-5dd5756b68-n7qpb" [d027f799-6ced-488e-a4f7-6df351193c64] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:44:24.942074   46126 system_pods.go:61] "etcd-default-k8s-diff-port-488423" [55bf80da-df13-4429-962c-7fdb5ab44ea8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 00:44:24.942084   46126 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-488423" [88715645-e98e-42be-ad99-cc7711605abc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 00:44:24.942094   46126 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-488423" [07935350-12e0-4e86-8f88-7e03890aa417] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 00:44:24.942104   46126 system_pods.go:61] "kube-proxy-2sfbm" [8d92ac1f-4070-4000-9bc6-3d277e0c8c6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 00:44:24.942115   46126 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-488423" [42baed98-6b29-4f33-8bb3-df082a1b36ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 00:44:24.942134   46126 system_pods.go:61] "metrics-server-57f55c9bc5-fk9xx" [8b0d0cd6-41c5-4b67-98f9-f046e959e0e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:44:24.942152   46126 system_pods.go:61] "storage-provisioner" [f1e6e7d1-86aa-403c-b753-2b94beb7d7b1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 00:44:24.942163   46126 system_pods.go:74] duration metric: took 17.475554ms to wait for pod list to return data ...
	I1128 00:44:24.942224   46126 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:44:26.037379   46126 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:44:26.037423   46126 node_conditions.go:123] node cpu capacity is 2
	I1128 00:44:26.037450   46126 node_conditions.go:105] duration metric: took 1.095218932s to run NodePressure ...
	I1128 00:44:26.037473   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:27.084620   46126 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.047120714s)
	I1128 00:44:27.084659   46126 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:44:27.100248   46126 kubeadm.go:787] kubelet initialised
	I1128 00:44:27.100282   46126 kubeadm.go:788] duration metric: took 15.606572ms waiting for restarted kubelet to initialise ...
	I1128 00:44:27.100293   46126 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:44:27.108069   46126 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.117188   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.117221   46126 pod_ready.go:81] duration metric: took 9.127662ms waiting for pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.117238   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.117247   46126 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.123182   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.123213   46126 pod_ready.go:81] duration metric: took 5.9547ms waiting for pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.123226   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.123235   46126 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.130170   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.130196   46126 pod_ready.go:81] duration metric: took 6.952194ms waiting for pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.130209   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.130216   46126 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.136895   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.136925   46126 pod_ready.go:81] duration metric: took 6.699975ms waiting for pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.136940   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.136950   46126 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2sfbm" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:24.811723   45269 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.215918902s)
	I1128 00:44:24.811757   45269 crio.go:451] Took 3.216081 seconds to extract the tarball
	I1128 00:44:24.811769   45269 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 00:44:24.856120   45269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:44:24.918138   45269 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1128 00:44:24.918185   45269 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1128 00:44:24.918257   45269 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:24.918296   45269 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1128 00:44:24.918305   45269 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1128 00:44:24.918314   45269 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:24.918297   45269 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:24.918261   45269 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:44:24.918264   45269 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:24.918585   45269 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:24.919955   45269 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:24.919959   45269 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:24.919988   45269 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1128 00:44:24.919964   45269 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:44:24.920093   45269 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:24.920302   45269 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:24.920482   45269 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:24.920497   45269 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1128 00:44:25.041095   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:25.048823   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:25.071401   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1128 00:44:25.073489   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:25.081089   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1128 00:44:25.083887   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:25.100582   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:25.150855   45269 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1128 00:44:25.150909   45269 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:25.150960   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.151148   45269 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1128 00:44:25.151198   45269 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:25.151250   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.181984   45269 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1128 00:44:25.182039   45269 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1128 00:44:25.182089   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.260634   45269 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1128 00:44:25.260687   45269 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:25.260744   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.269386   45269 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1128 00:44:25.269436   45269 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1128 00:44:25.269460   45269 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1128 00:44:25.269480   45269 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:25.269508   45269 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1128 00:44:25.269517   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.269539   45269 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:25.269552   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.269573   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.269626   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:25.269642   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:25.269701   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1128 00:44:25.269733   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:25.368354   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1128 00:44:25.368405   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1128 00:44:25.368462   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1128 00:44:25.368474   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:25.368536   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:25.368537   45269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1128 00:44:25.375204   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1128 00:44:25.375378   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1128 00:44:25.439797   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1128 00:44:25.465699   45269 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1128 00:44:25.465731   45269 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1128 00:44:25.465788   45269 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1128 00:44:25.465795   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1128 00:44:25.465810   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1128 00:44:25.797872   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:44:27.031275   45269 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.233351991s)
	I1128 00:44:27.031525   45269 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.565711109s)
	I1128 00:44:27.031549   45269 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1128 00:44:27.031594   45269 cache_images.go:92] LoadImages completed in 2.113388877s
	W1128 00:44:27.031667   45269 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I1128 00:44:27.031754   45269 ssh_runner.go:195] Run: crio config
	I1128 00:44:27.100851   45269 cni.go:84] Creating CNI manager for ""
	I1128 00:44:27.100882   45269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:27.100901   45269 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:44:27.100924   45269 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-732472 NodeName:old-k8s-version-732472 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1128 00:44:27.101119   45269 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-732472"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-732472
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.172:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:44:27.101241   45269 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-732472 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-732472 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 00:44:27.101312   45269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1128 00:44:27.111964   45269 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:44:27.112049   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:44:27.122796   45269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1128 00:44:27.149768   45269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:44:27.168520   45269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1128 00:44:27.187296   45269 ssh_runner.go:195] Run: grep 192.168.39.172	control-plane.minikube.internal$ /etc/hosts
	I1128 00:44:27.191606   45269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:44:27.205482   45269 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472 for IP: 192.168.39.172
	I1128 00:44:27.205521   45269 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:44:27.205720   45269 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:44:27.205758   45269 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:44:27.205825   45269 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.key
	I1128 00:44:27.205885   45269 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/apiserver.key.ee96354a
	I1128 00:44:27.205931   45269 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/proxy-client.key
	I1128 00:44:27.206060   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:44:27.206115   45269 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:44:27.206130   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:44:27.206176   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:44:27.206214   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:44:27.206251   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:44:27.206313   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:44:27.207009   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:44:27.233932   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 00:44:27.258138   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:44:27.282203   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 00:44:27.309304   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:44:27.335945   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:44:27.360118   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:44:23.984808   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:26.118398   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:27.491683   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "kube-proxy-2sfbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.491724   46126 pod_ready.go:81] duration metric: took 354.756767ms waiting for pod "kube-proxy-2sfbm" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.491736   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "kube-proxy-2sfbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.491745   46126 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.890269   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.890299   46126 pod_ready.go:81] duration metric: took 398.544263ms waiting for pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.890316   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.890324   46126 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:28.289016   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:28.289043   46126 pod_ready.go:81] duration metric: took 398.709637ms waiting for pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:28.289055   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:28.289062   46126 pod_ready.go:38] duration metric: took 1.188759196s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
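The pod_ready.go entries above are minikube waiting for system-critical pods to report the Ready condition. As a rough, standalone illustration of the same idea (not minikube's implementation), the Go sketch below polls a pod with client-go until Ready is True or a timeout passes; the kubeconfig path, namespace, and pod name are placeholders.

// Illustrative sketch only: poll a pod's Ready condition with client-go,
// roughly the behaviour the pod_ready.go log lines describe.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady returns nil once the pod reports Ready=True, or an error after timeout.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // crude fixed-interval poll; the real helper uses its own retry logic
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// Placeholder kubeconfig path, namespace and pod name; adjust for a real cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-example", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}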
	I1128 00:44:28.289084   46126 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:44:28.301648   46126 ops.go:34] apiserver oom_adj: -16
	I1128 00:44:28.301676   46126 kubeadm.go:640] restartCluster took 24.277487612s
	I1128 00:44:28.301683   46126 kubeadm.go:406] StartCluster complete in 24.339149368s
	I1128 00:44:28.301697   46126 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:44:28.301770   46126 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:44:28.303560   46126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:44:28.303802   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:44:28.303915   46126 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:44:28.303994   46126 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-488423"
	I1128 00:44:28.304023   46126 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-488423"
	W1128 00:44:28.304038   46126 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:44:28.304040   46126 config.go:182] Loaded profile config "default-k8s-diff-port-488423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:44:28.304063   46126 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-488423"
	I1128 00:44:28.304117   46126 host.go:66] Checking if "default-k8s-diff-port-488423" exists ...
	I1128 00:44:28.304118   46126 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-488423"
	W1128 00:44:28.304134   46126 addons.go:240] addon metrics-server should already be in state true
	I1128 00:44:28.304220   46126 host.go:66] Checking if "default-k8s-diff-port-488423" exists ...
	I1128 00:44:28.304547   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.304589   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.304669   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.304741   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.304928   46126 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-488423"
	I1128 00:44:28.304956   46126 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-488423"
	I1128 00:44:28.305388   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.305437   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.310450   46126 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-488423" context rescaled to 1 replicas
	I1128 00:44:28.310496   46126 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.242 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:44:28.312602   46126 out.go:177] * Verifying Kubernetes components...
	I1128 00:44:28.314027   46126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:44:28.321407   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40955
	I1128 00:44:28.321423   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41137
	I1128 00:44:28.322247   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.322287   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.322797   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.322820   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.322942   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.322968   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.323210   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.323242   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35475
	I1128 00:44:28.323323   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.323556   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.323775   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.323818   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.323857   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.323891   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.323937   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.323957   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.324293   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.324471   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:44:28.327954   46126 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-488423"
	W1128 00:44:28.327972   46126 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:44:28.327993   46126 host.go:66] Checking if "default-k8s-diff-port-488423" exists ...
	I1128 00:44:28.328327   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.328355   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.342376   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40729
	I1128 00:44:28.342789   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.343325   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.343366   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.343751   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.343978   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38927
	I1128 00:44:28.343995   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:44:28.344392   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.344983   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.345009   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.345366   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.345910   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:44:28.348242   46126 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:44:28.346449   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39125
	I1128 00:44:28.350126   46126 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:44:28.350147   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:44:28.350166   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:44:28.346666   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.350250   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.348589   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.350911   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.350930   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.351442   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.351817   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:44:28.353691   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:44:28.353876   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.355460   46126 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 00:44:24.141365   45815 pod_ready.go:102] pod "etcd-no-preload-473615" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:26.518655   45815 pod_ready.go:102] pod "etcd-no-preload-473615" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:26.887843   45815 pod_ready.go:92] pod "etcd-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:26.887877   45815 pod_ready.go:81] duration metric: took 4.768943982s waiting for pod "etcd-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:26.887891   45815 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:26.909504   45815 pod_ready.go:92] pod "kube-apiserver-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:26.909600   45815 pod_ready.go:81] duration metric: took 21.699474ms waiting for pod "kube-apiserver-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:26.909627   45815 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:28.354335   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:44:28.354504   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:44:28.357068   46126 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 00:44:28.357088   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 00:44:28.357094   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.357109   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:44:28.357228   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:44:28.357356   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:44:28.357475   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:44:28.360015   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.360725   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:44:28.360785   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.360994   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:44:28.361177   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:44:28.361341   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:44:28.361503   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:44:28.368150   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I1128 00:44:28.368511   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.369005   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.369023   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.369326   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.369481   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:44:28.370807   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:44:28.371066   46126 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:44:28.371078   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:44:28.371092   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:44:28.373819   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.374409   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:44:28.374510   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:44:28.374541   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.374602   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:44:28.374688   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:44:28.374768   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:44:28.474380   46126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:44:28.505183   46126 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 00:44:28.505206   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 00:44:28.536550   46126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:44:28.584832   46126 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 00:44:28.584857   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 00:44:28.626477   46126 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1128 00:44:28.626473   46126 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-488423" to be "Ready" ...
	I1128 00:44:28.644406   46126 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:44:28.644436   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 00:44:28.671872   46126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:44:29.867337   46126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.330746736s)
	I1128 00:44:29.867437   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.867451   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.867490   46126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.393076585s)
	I1128 00:44:29.867532   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.867553   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.867827   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.867841   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.867850   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.867858   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.867988   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.868006   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.868029   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.868038   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.868129   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Closing plugin on server side
	I1128 00:44:29.868145   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.868159   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.868381   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.868400   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.868429   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Closing plugin on server side
	I1128 00:44:29.876482   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.876505   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.876724   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.876736   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.885484   46126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.213575767s)
	I1128 00:44:29.885534   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.885551   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.885841   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.885862   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.885873   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.885883   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.885887   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Closing plugin on server side
	I1128 00:44:29.886153   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Closing plugin on server side
	I1128 00:44:29.886164   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.886194   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.886211   46126 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-488423"
	I1128 00:44:29.889173   46126 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1128 00:44:29.890607   46126 addons.go:502] enable addons completed in 1.586699021s: enabled=[storage-provisioner default-storageclass metrics-server]
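The addon steps above copy each manifest onto the node and then apply it with an explicit KUBECONFIG. The sketch below reproduces only the final apply step locally with Go's os/exec, assuming kubectl is on PATH; the manifest and kubeconfig paths are copied from the log but treated here as placeholders, and the sudo/SSH wrapping is omitted.

// Illustrative sketch: apply an addon manifest with an explicit KUBECONFIG,
// mirroring the "sudo KUBECONFIG=... kubectl apply -f ..." commands in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	// Point kubectl at a specific kubeconfig instead of the default ~/.kube/config.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
	}
}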
	I1128 00:44:30.716680   46126 node_ready.go:58] node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.385529   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:44:27.411354   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:44:27.439142   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:44:27.466763   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:44:27.497738   45269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:44:27.518132   45269 ssh_runner.go:195] Run: openssl version
	I1128 00:44:27.524720   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:44:27.537673   45269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:44:27.542561   45269 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:44:27.542623   45269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:44:27.548137   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:44:27.558112   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:44:27.568318   45269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:44:27.573638   45269 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:44:27.573697   45269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:44:27.579739   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:44:27.589908   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:44:27.599937   45269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:27.606264   45269 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:27.606340   45269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:27.612850   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:44:27.623388   45269 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:44:27.628140   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:44:27.634670   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:44:27.642071   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:44:27.650207   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:44:27.656836   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:44:27.662837   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
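The openssl x509 -checkend 86400 commands above exit non-zero when a certificate expires within the next 24 hours. A minimal Go equivalent of that check, using only the standard library and a placeholder certificate path, might look like this:

// Illustrative sketch: fail if a PEM certificate expires within the next 24h,
// the same condition the -checkend 86400 calls test for.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt") // placeholder path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past 24h, NotAfter:", cert.NotAfter)
}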
	I1128 00:44:27.668909   45269 kubeadm.go:404] StartCluster: {Name:old-k8s-version-732472 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.16.0 ClusterName:old-k8s-version-732472 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:44:27.669005   45269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:44:27.669075   45269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:27.711918   45269 cri.go:89] found id: ""
	I1128 00:44:27.711993   45269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:44:27.722058   45269 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:44:27.722084   45269 kubeadm.go:636] restartCluster start
	I1128 00:44:27.722146   45269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:44:27.731619   45269 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:27.733224   45269 kubeconfig.go:92] found "old-k8s-version-732472" server: "https://192.168.39.172:8443"
	I1128 00:44:27.736867   45269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:44:27.747794   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:27.747862   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:27.762055   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:27.762079   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:27.762146   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:27.773241   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:28.273910   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:28.274001   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:28.286159   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:28.773393   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:28.773492   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:28.785781   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:29.274130   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:29.274199   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:29.289388   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:29.773916   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:29.774022   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:29.789483   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:30.273920   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:30.274026   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:30.285579   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:30.773910   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:30.774005   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:30.785536   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:31.273906   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:31.273977   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:31.285344   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:31.774284   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:31.774352   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:31.786435   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:32.273928   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:32.274008   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:32.289424   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:28.484735   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:30.983088   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:28.945293   45815 pod_ready.go:102] pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:30.445111   45815 pod_ready.go:92] pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:30.445133   45815 pod_ready.go:81] duration metric: took 3.535488087s waiting for pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.445143   45815 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-trr4j" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.450322   45815 pod_ready.go:92] pod "kube-proxy-trr4j" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:30.450342   45815 pod_ready.go:81] duration metric: took 5.193276ms waiting for pod "kube-proxy-trr4j" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.450350   45815 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.455002   45815 pod_ready.go:92] pod "kube-scheduler-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:30.455021   45815 pod_ready.go:81] duration metric: took 4.664949ms waiting for pod "kube-scheduler-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.455030   45815 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:32.915566   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:32.717086   46126 node_ready.go:58] node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:33.216905   46126 node_ready.go:49] node "default-k8s-diff-port-488423" has status "Ready":"True"
	I1128 00:44:33.216930   46126 node_ready.go:38] duration metric: took 4.590426391s waiting for node "default-k8s-diff-port-488423" to be "Ready" ...
	I1128 00:44:33.216938   46126 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:44:33.223257   46126 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:33.744567   46126 pod_ready.go:92] pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:33.744592   46126 pod_ready.go:81] duration metric: took 521.313062ms waiting for pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:33.744601   46126 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:35.763867   46126 pod_ready.go:102] pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:32.773549   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:32.773643   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:32.785461   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:33.273911   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:33.273994   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:33.285646   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:33.773944   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:33.774046   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:33.786576   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:34.273902   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:34.273969   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:34.285791   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:34.773895   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:34.773965   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:34.785934   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:35.273675   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:35.273738   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:35.285549   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:35.773954   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:35.774041   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:35.786010   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:36.273591   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:36.273659   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:36.284794   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:36.773864   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:36.773931   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:36.786610   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:37.273899   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:37.274025   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:37.285678   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:32.983159   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:34.985149   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:37.482210   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:35.413821   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:37.417790   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:37.768358   46126 pod_ready.go:92] pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:37.768398   46126 pod_ready.go:81] duration metric: took 4.023788643s waiting for pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.768411   46126 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.775805   46126 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:37.775835   46126 pod_ready.go:81] duration metric: took 7.41435ms waiting for pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.775847   46126 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.788110   46126 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:37.788139   46126 pod_ready.go:81] duration metric: took 12.28235ms waiting for pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.788151   46126 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2sfbm" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:38.018402   46126 pod_ready.go:92] pod "kube-proxy-2sfbm" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:38.018426   46126 pod_ready.go:81] duration metric: took 230.267334ms waiting for pod "kube-proxy-2sfbm" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:38.018443   46126 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:38.818531   46126 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:38.818559   46126 pod_ready.go:81] duration metric: took 800.108369ms waiting for pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:38.818572   46126 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:41.127953   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:37.748214   45269 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:44:37.748260   45269 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:44:37.748276   45269 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:44:37.748334   45269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:37.796781   45269 cri.go:89] found id: ""
	I1128 00:44:37.796866   45269 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:44:37.814832   45269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:44:37.824395   45269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:44:37.824469   45269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:37.833592   45269 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:37.833618   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:37.955071   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:38.939529   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:39.160852   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:39.243789   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:39.372434   45269 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:44:39.372525   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:39.405594   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:39.927024   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:40.426600   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:40.927163   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:40.966905   45269 api_server.go:72] duration metric: took 1.594470962s to wait for apiserver process to appear ...
	I1128 00:44:40.966937   45269 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:44:40.966959   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:40.967412   45269 api_server.go:269] stopped: https://192.168.39.172:8443/healthz: Get "https://192.168.39.172:8443/healthz": dial tcp 192.168.39.172:8443: connect: connection refused
	I1128 00:44:40.967457   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:40.967851   45269 api_server.go:269] stopped: https://192.168.39.172:8443/healthz: Get "https://192.168.39.172:8443/healthz": dial tcp 192.168.39.172:8443: connect: connection refused
	I1128 00:44:41.468536   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:39.483204   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:41.483578   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:39.914738   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:42.415305   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:43.130157   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:45.628970   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:46.468813   45269 api_server.go:269] stopped: https://192.168.39.172:8443/healthz: Get "https://192.168.39.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1128 00:44:46.468859   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:43.984318   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:46.483855   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:44.914911   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:47.415274   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:47.435553   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:47.435586   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:47.435601   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:47.480977   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:47.481002   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:47.481012   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:47.506064   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:47.506098   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:47.968355   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:47.974731   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1128 00:44:47.974766   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1128 00:44:48.468954   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:48.484597   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1128 00:44:48.484627   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1128 00:44:48.968810   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:48.979310   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I1128 00:44:48.987751   45269 api_server.go:141] control plane version: v1.16.0
	I1128 00:44:48.987782   45269 api_server.go:131] duration metric: took 8.020836981s to wait for apiserver health ...
	I1128 00:44:48.987793   45269 cni.go:84] Creating CNI manager for ""
	I1128 00:44:48.987801   45269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:48.989720   45269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:44:48.129394   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:50.130239   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:48.991320   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:44:49.001231   45269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:44:49.019895   45269 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:44:49.027389   45269 system_pods.go:59] 7 kube-system pods found
	I1128 00:44:49.027417   45269 system_pods.go:61] "coredns-5644d7b6d9-9sh7z" [dcc226fb-5fd9-4757-bd93-1113f185cdce] Running
	I1128 00:44:49.027422   45269 system_pods.go:61] "etcd-old-k8s-version-732472" [a5899a5a-4812-41e1-9251-78fdaeea9597] Running
	I1128 00:44:49.027428   45269 system_pods.go:61] "kube-apiserver-old-k8s-version-732472" [13d2df8c-84a3-4bd4-8eab-ed9f732a3839] Running
	I1128 00:44:49.027435   45269 system_pods.go:61] "kube-controller-manager-old-k8s-version-732472" [6dc1e479-1a3a-4b9e-acd6-1183a25aece4] Running
	I1128 00:44:49.027441   45269 system_pods.go:61] "kube-proxy-jqrks" [e8fd665a-099e-4941-a8f2-917d2b864eeb] Running
	I1128 00:44:49.027447   45269 system_pods.go:61] "kube-scheduler-old-k8s-version-732472" [de147a31-927e-4051-b6ae-05ddf59290c8] Running
	I1128 00:44:49.027457   45269 system_pods.go:61] "storage-provisioner" [8d7e725e-6c26-4435-8605-88c7d924f5ca] Running
	I1128 00:44:49.027469   45269 system_pods.go:74] duration metric: took 7.544096ms to wait for pod list to return data ...
	I1128 00:44:49.027479   45269 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:44:49.032133   45269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:44:49.032170   45269 node_conditions.go:123] node cpu capacity is 2
	I1128 00:44:49.032183   45269 node_conditions.go:105] duration metric: took 4.695493ms to run NodePressure ...
	I1128 00:44:49.032203   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:49.293443   45269 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:44:49.297880   45269 retry.go:31] will retry after 216.894607ms: kubelet not initialised
	I1128 00:44:49.528912   45269 retry.go:31] will retry after 354.406288ms: kubelet not initialised
	I1128 00:44:49.897328   45269 retry.go:31] will retry after 462.959721ms: kubelet not initialised
	I1128 00:44:50.368260   45269 retry.go:31] will retry after 930.99638ms: kubelet not initialised
	I1128 00:44:51.303993   45269 retry.go:31] will retry after 1.275477572s: kubelet not initialised
	I1128 00:44:48.984387   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:51.482900   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:49.916072   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:52.415253   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:52.626182   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:54.626822   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:56.627881   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:52.584797   45269 retry.go:31] will retry after 2.542158001s: kubelet not initialised
	I1128 00:44:55.132600   45269 retry.go:31] will retry after 1.850404606s: kubelet not initialised
	I1128 00:44:56.987924   45269 retry.go:31] will retry after 2.371310185s: kubelet not initialised
	I1128 00:44:53.483557   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:55.982236   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:54.916135   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:57.415818   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:59.127409   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:01.629561   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:59.366141   45269 retry.go:31] will retry after 8.068803464s: kubelet not initialised
	I1128 00:44:57.983189   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:00.482336   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:02.483708   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:59.915991   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:02.414672   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:04.127296   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:06.127766   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:04.484008   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:06.983257   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:04.415147   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:06.914282   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:08.128322   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:10.627792   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:07.439538   45269 retry.go:31] will retry after 10.31431504s: kubelet not initialised
	I1128 00:45:08.985186   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:11.481933   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:08.914385   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:11.414899   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:12.628874   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:14.629312   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:17.126592   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:13.487653   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:15.983710   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:13.915497   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:15.915686   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:18.416396   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:19.127337   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:21.128352   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:17.759682   45269 retry.go:31] will retry after 12.137072248s: kubelet not initialised
	I1128 00:45:18.482187   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:20.982360   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:20.915228   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:22.918669   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:23.630252   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:26.128326   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:22.982597   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:24.983348   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:26.985418   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:25.415620   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:27.914150   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:28.626533   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:30.633655   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:29.902379   45269 kubeadm.go:787] kubelet initialised
	I1128 00:45:29.902403   45269 kubeadm.go:788] duration metric: took 40.608931816s waiting for restarted kubelet to initialise ...
	I1128 00:45:29.902410   45269 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:45:29.908442   45269 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9sh7z" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.914018   45269 pod_ready.go:92] pod "coredns-5644d7b6d9-9sh7z" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:29.914055   45269 pod_ready.go:81] duration metric: took 5.584146ms waiting for pod "coredns-5644d7b6d9-9sh7z" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.914069   45269 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-v8z7h" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.918699   45269 pod_ready.go:92] pod "coredns-5644d7b6d9-v8z7h" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:29.918720   45269 pod_ready.go:81] duration metric: took 4.644035ms waiting for pod "coredns-5644d7b6d9-v8z7h" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.918729   45269 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.922818   45269 pod_ready.go:92] pod "etcd-old-k8s-version-732472" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:29.922837   45269 pod_ready.go:81] duration metric: took 4.102217ms waiting for pod "etcd-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.922846   45269 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.927182   45269 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-732472" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:29.927208   45269 pod_ready.go:81] duration metric: took 4.354519ms waiting for pod "kube-apiserver-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.927220   45269 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:30.301553   45269 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-732472" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:30.301583   45269 pod_ready.go:81] duration metric: took 374.352863ms waiting for pod "kube-controller-manager-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:30.301611   45269 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jqrks" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:30.700858   45269 pod_ready.go:92] pod "kube-proxy-jqrks" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:30.700879   45269 pod_ready.go:81] duration metric: took 399.260896ms waiting for pod "kube-proxy-jqrks" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:30.700890   45269 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:31.103319   45269 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-732472" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:31.103340   45269 pod_ready.go:81] duration metric: took 402.442769ms waiting for pod "kube-scheduler-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:31.103349   45269 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.482088   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:31.483235   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:29.915117   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:32.416142   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:33.127196   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:35.127500   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:37.128846   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:33.422466   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:35.908596   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:33.983360   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:35.983776   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:34.417575   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:36.915005   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:39.627473   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:42.126292   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:37.908783   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:39.909842   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:41.910185   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:38.481697   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:40.481935   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:42.483458   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:39.415244   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:41.915086   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:44.127088   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:46.128254   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:44.409802   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:46.415828   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:44.986515   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:47.483162   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:44.414253   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:46.416386   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:48.628705   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:51.130754   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:48.908171   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:50.910974   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:49.985617   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:52.483720   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:48.915063   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:50.915382   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:53.414813   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:53.627668   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:55.629312   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:53.409415   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:55.420993   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:54.983055   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:56.983251   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:55.919627   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:58.415481   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:58.129666   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:00.629368   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:57.910151   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:00.408805   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:59.485375   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:01.983754   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:00.915086   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:03.413478   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:03.129933   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:05.627697   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:02.410888   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:04.910323   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:04.482593   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:06.981922   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:05.414437   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:07.415659   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:07.628741   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:10.126717   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:12.127246   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:07.408374   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:09.411381   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:11.416658   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:08.982790   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:10.984134   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:09.914828   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:11.915812   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:14.135673   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:16.626139   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:13.909480   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:16.409873   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:13.481792   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:15.482823   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:14.416315   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:16.914123   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:18.628828   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:21.131592   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:18.411060   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:20.910071   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:17.983098   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:20.482047   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:22.483266   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:19.413826   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:21.415442   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:23.626664   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:25.626823   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:23.424355   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:25.908255   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:24.984606   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:27.482265   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:23.915227   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:26.417059   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:27.628773   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:30.126818   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:27.911487   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:30.409652   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:29.485507   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:31.983913   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:28.916438   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:31.415565   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:32.626887   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:34.628401   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:37.128691   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:32.910776   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:35.421469   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:34.482605   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:36.982844   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:33.913533   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:35.914337   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:37.914708   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:39.627072   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:41.627591   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:37.908233   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:39.910199   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:38.983620   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:41.482862   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:39.914965   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:41.915003   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:43.628492   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:46.127393   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:42.408895   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:44.409264   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:46.909077   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:43.483111   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:45.483236   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:43.916039   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:46.415407   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:48.627253   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:51.127503   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:49.418512   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:51.427899   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:47.982977   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:49.983264   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:52.483168   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:48.914124   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:50.915620   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:52.919567   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:53.627296   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:55.627334   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:53.908531   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:56.408610   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:54.983084   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:57.481889   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:55.414154   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:57.416518   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:58.126605   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:00.127372   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:02.127896   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:58.410152   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:00.910206   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:59.482177   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:01.982997   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:59.915381   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:01.915574   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:04.626760   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:06.628849   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:03.417243   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:05.417887   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:03.983490   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:05.984161   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:04.414677   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:06.420179   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:09.127843   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:11.626987   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:07.908838   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:10.408385   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:08.482404   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:10.484146   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:08.914093   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:10.922145   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:13.417231   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:13.627586   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:15.628294   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:12.410728   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:14.910177   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:16.910469   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:12.982123   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:14.984037   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:17.483771   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:15.915323   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:18.415070   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:18.129617   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:20.628266   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:19.423065   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:21.908978   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:19.983122   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:22.482857   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:20.415232   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:22.915218   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:23.129285   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:25.627839   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:23.910794   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:26.409956   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:24.985146   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:27.482512   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:24.916041   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:27.415836   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:27.627978   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:30.127213   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:32.127569   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:28.413035   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:30.909092   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:29.483528   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:31.983745   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:29.913604   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:31.914567   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:34.129952   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:36.626951   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:33.414345   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:35.414559   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:34.481916   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:36.482024   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:34.413520   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:36.414517   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:38.416081   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:38.627773   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:41.126690   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:37.414665   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:39.908876   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:38.482323   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:40.983125   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:40.914615   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:43.415528   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:43.128692   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:45.627228   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:42.412788   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:44.909732   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:46.910133   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:43.482424   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:45.482507   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:47.482562   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:45.416841   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:47.914229   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:48.127074   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:50.627355   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:49.411030   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:51.420657   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:49.483765   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:51.982325   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:50.414235   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:52.414715   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:52.627557   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:54.628111   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:57.129482   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:53.910232   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:56.409320   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:53.795074   45580 pod_ready.go:81] duration metric: took 4m0.000752019s waiting for pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace to be "Ready" ...
	E1128 00:47:53.795108   45580 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:47:53.795124   45580 pod_ready.go:38] duration metric: took 4m9.844437599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:47:53.795148   45580 kubeadm.go:640] restartCluster took 4m29.759592783s
	W1128 00:47:53.795209   45580 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 00:47:53.795237   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 00:47:54.416610   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:56.915781   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:59.129569   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:01.627046   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:58.409599   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:00.409906   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:58.916155   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:01.416966   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:03.627676   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:06.126607   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:02.410451   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:04.411074   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:06.912243   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:07.609428   45580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.814163406s)
	I1128 00:48:07.609508   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:07.624300   45580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:48:07.634606   45580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:48:07.644733   45580 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:48:07.644802   45580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 00:48:03.915780   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:06.416602   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:08.128657   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:10.629487   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:09.411193   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:11.908147   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:07.867577   45580 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 00:48:08.915404   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:11.416668   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:13.129233   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:15.630498   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:13.909762   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:16.409160   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:13.916628   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:15.916715   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:17.917022   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:19.126081   45580 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1128 00:48:19.126157   45580 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 00:48:19.126245   45580 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 00:48:19.126356   45580 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 00:48:19.126476   45580 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 00:48:19.126544   45580 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 00:48:19.128354   45580 out.go:204]   - Generating certificates and keys ...
	I1128 00:48:19.128461   45580 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 00:48:19.128546   45580 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 00:48:19.128664   45580 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 00:48:19.128807   45580 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 00:48:19.128927   45580 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 00:48:19.129001   45580 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 00:48:19.129100   45580 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 00:48:19.129175   45580 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 00:48:19.129275   45580 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 00:48:19.129387   45580 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 00:48:19.129432   45580 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 00:48:19.129501   45580 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 00:48:19.129559   45580 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 00:48:19.129616   45580 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 00:48:19.129696   45580 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 00:48:19.129760   45580 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 00:48:19.129853   45580 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 00:48:19.129921   45580 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 00:48:19.131350   45580 out.go:204]   - Booting up control plane ...
	I1128 00:48:19.131462   45580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 00:48:19.131578   45580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 00:48:19.131674   45580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 00:48:19.131798   45580 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 00:48:19.131914   45580 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 00:48:19.131972   45580 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 00:48:19.132149   45580 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 00:48:19.132245   45580 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502916 seconds
	I1128 00:48:19.132388   45580 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 00:48:19.132540   45580 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 00:48:19.132619   45580 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 00:48:19.132850   45580 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-304541 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 00:48:19.132959   45580 kubeadm.go:322] [bootstrap-token] Using token: tbyyd7.r005gkl9z2ll5pno
	I1128 00:48:19.134488   45580 out.go:204]   - Configuring RBAC rules ...
	I1128 00:48:19.134603   45580 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 00:48:19.134691   45580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 00:48:19.134841   45580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 00:48:19.135030   45580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 00:48:19.135200   45580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 00:48:19.135311   45580 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 00:48:19.135453   45580 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 00:48:19.135532   45580 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 00:48:19.135600   45580 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 00:48:19.135611   45580 kubeadm.go:322] 
	I1128 00:48:19.135692   45580 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 00:48:19.135700   45580 kubeadm.go:322] 
	I1128 00:48:19.135798   45580 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 00:48:19.135807   45580 kubeadm.go:322] 
	I1128 00:48:19.135840   45580 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 00:48:19.135916   45580 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 00:48:19.135987   45580 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 00:48:19.135996   45580 kubeadm.go:322] 
	I1128 00:48:19.136074   45580 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 00:48:19.136084   45580 kubeadm.go:322] 
	I1128 00:48:19.136153   45580 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 00:48:19.136161   45580 kubeadm.go:322] 
	I1128 00:48:19.136231   45580 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 00:48:19.136329   45580 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 00:48:19.136439   45580 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 00:48:19.136448   45580 kubeadm.go:322] 
	I1128 00:48:19.136552   45580 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 00:48:19.136662   45580 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 00:48:19.136674   45580 kubeadm.go:322] 
	I1128 00:48:19.136766   45580 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token tbyyd7.r005gkl9z2ll5pno \
	I1128 00:48:19.136878   45580 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1128 00:48:19.136907   45580 kubeadm.go:322] 	--control-plane 
	I1128 00:48:19.136913   45580 kubeadm.go:322] 
	I1128 00:48:19.136986   45580 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 00:48:19.136998   45580 kubeadm.go:322] 
	I1128 00:48:19.137097   45580 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tbyyd7.r005gkl9z2ll5pno \
	I1128 00:48:19.137259   45580 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1128 00:48:19.137282   45580 cni.go:84] Creating CNI manager for ""
	I1128 00:48:19.137290   45580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:48:19.138890   45580 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:48:18.126502   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:20.131785   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:18.410659   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:20.910338   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:19.140172   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:48:19.160540   45580 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:48:19.224333   45580 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:48:19.224409   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:19.224455   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=embed-certs-304541 minikube.k8s.io/updated_at=2023_11_28T00_48_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:19.301346   45580 ops.go:34] apiserver oom_adj: -16
	I1128 00:48:19.544274   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:19.656215   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:20.257645   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:20.757476   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:21.257246   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:21.757278   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:22.256655   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:22.757282   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
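	[editor's note] The run above writes a single bridge conflist to /etc/cni/net.d/1-k8s.conflist, binds kube-system's default service account to cluster-admin, labels the node, and then polls "kubectl get sa default" until the account exists. A hypothetical way to inspect that CNI configuration by hand, assuming the embed-certs-304541 profile from this run is still up (profile name and file path taken from the log; the conflist contents themselves are not shown in the log and may differ):
	# sketch only: "minikube ssh -- <cmd>" runs a command inside the guest VM
	minikube ssh -p embed-certs-304541 -- sudo cat /etc/cni/net.d/1-k8s.conflist
	minikube ssh -p embed-certs-304541 -- sudo ls /opt/cni/bin   # standard CNI plugin directory (assumption)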
	I1128 00:48:20.415051   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:22.914901   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:22.627184   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:24.627388   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:27.127311   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:23.409417   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:25.909086   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:23.257594   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:23.757135   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:24.257396   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:24.757508   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:25.257426   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:25.756605   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:26.256768   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:26.756656   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:27.256783   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:27.756856   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:25.414964   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:27.415763   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:28.257005   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:28.756875   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:29.256833   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:29.757261   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:30.257313   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:30.756918   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:31.257535   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:31.757356   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:31.917284   45580 kubeadm.go:1081] duration metric: took 12.692941702s to wait for elevateKubeSystemPrivileges.
	I1128 00:48:31.917326   45580 kubeadm.go:406] StartCluster complete in 5m7.933075195s
	I1128 00:48:31.917353   45580 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:48:31.917430   45580 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:48:31.919940   45580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:48:31.920855   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:48:31.921063   45580 config.go:182] Loaded profile config "embed-certs-304541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:48:31.921037   45580 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:48:31.921110   45580 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-304541"
	I1128 00:48:31.921123   45580 addons.go:69] Setting default-storageclass=true in profile "embed-certs-304541"
	I1128 00:48:31.921143   45580 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-304541"
	I1128 00:48:31.921148   45580 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-304541"
	W1128 00:48:31.921152   45580 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:48:31.921116   45580 addons.go:69] Setting metrics-server=true in profile "embed-certs-304541"
	I1128 00:48:31.921213   45580 host.go:66] Checking if "embed-certs-304541" exists ...
	I1128 00:48:31.921220   45580 addons.go:231] Setting addon metrics-server=true in "embed-certs-304541"
	W1128 00:48:31.921229   45580 addons.go:240] addon metrics-server should already be in state true
	I1128 00:48:31.921265   45580 host.go:66] Checking if "embed-certs-304541" exists ...
	I1128 00:48:31.921531   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.921545   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.921578   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.921584   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.921594   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.921605   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.941345   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39959
	I1128 00:48:31.941374   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33283
	I1128 00:48:31.941359   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I1128 00:48:31.942009   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.942040   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.942449   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.942460   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.942477   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.942488   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.942549   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.942937   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.942955   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.943129   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.943134   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.943300   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.943646   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.943671   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.943774   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:48:31.944439   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.944470   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.947579   45580 addons.go:231] Setting addon default-storageclass=true in "embed-certs-304541"
	W1128 00:48:31.947605   45580 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:48:31.947635   45580 host.go:66] Checking if "embed-certs-304541" exists ...
	I1128 00:48:31.948083   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.948114   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.964906   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39541
	I1128 00:48:31.964942   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38435
	I1128 00:48:31.966157   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.966261   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.966778   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.966795   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.966980   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.966999   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.967444   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.967481   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37679
	I1128 00:48:31.967447   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.967636   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:48:31.968331   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.968434   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:48:31.968812   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.968830   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.969729   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:48:31.972519   45580 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:48:31.970271   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:48:31.972982   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.974461   45580 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:48:31.974479   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:48:31.974498   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:48:31.976187   45580 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 00:48:31.974991   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.977660   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:31.977907   45580 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 00:48:31.977925   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 00:48:31.977943   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:48:31.978001   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.978243   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:48:31.978264   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:31.978506   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:48:31.978727   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:48:31.978954   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:48:31.979170   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:48:31.980878   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:31.981226   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:48:31.981262   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:31.981399   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:48:31.981571   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:48:31.981690   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:48:31.981810   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:48:31.997812   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43311
	I1128 00:48:31.998404   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.998989   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.999016   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.999427   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.999652   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:48:32.001212   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:48:32.001482   45580 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:48:32.001496   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:48:32.001513   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:48:32.002981   45580 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-304541" context rescaled to 1 replicas
	I1128 00:48:32.003019   45580 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.93 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:48:32.005961   45580 out.go:177] * Verifying Kubernetes components...
	I1128 00:48:29.127403   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:31.127830   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:27.911587   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:30.411923   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:32.004640   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:32.005211   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:48:32.007586   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:32.007585   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:48:32.007700   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:32.007722   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:48:32.007894   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:48:32.008049   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:48:32.213297   45580 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 00:48:32.213322   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 00:48:32.255646   45580 node_ready.go:35] waiting up to 6m0s for node "embed-certs-304541" to be "Ready" ...
	I1128 00:48:32.255743   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 00:48:32.268542   45580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:48:32.270044   45580 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 00:48:32.270066   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 00:48:32.304458   45580 node_ready.go:49] node "embed-certs-304541" has status "Ready":"True"
	I1128 00:48:32.304486   45580 node_ready.go:38] duration metric: took 48.802082ms waiting for node "embed-certs-304541" to be "Ready" ...
	I1128 00:48:32.304498   45580 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:48:32.320550   45580 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6n54l" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:32.437814   45580 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:48:32.437852   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 00:48:32.462274   45580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:48:32.541622   45580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
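	[editor's note] The long sed pipeline at 00:48:32 above rewrites the CoreDNS ConfigMap before replacing it. Reconstructing from that command alone, the resulting Corefile should gain a "log" directive before "errors" and a hosts block before the resolv.conf forwarder, roughly (IP taken from the log, indentation approximate):
	log
	errors
	hosts {
	   192.168.50.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf ...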
	I1128 00:48:29.418692   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:30.455152   45815 pod_ready.go:81] duration metric: took 4m0.000108261s waiting for pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace to be "Ready" ...
	E1128 00:48:30.455199   45815 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:48:30.455216   45815 pod_ready.go:38] duration metric: took 4m12.906382743s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:48:30.455251   45815 kubeadm.go:640] restartCluster took 4m33.513232005s
	W1128 00:48:30.455312   45815 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 00:48:30.455356   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 00:48:34.327113   45580 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.071322786s)
	I1128 00:48:34.327155   45580 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1128 00:48:34.342711   45580 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.074127133s)
	I1128 00:48:34.342776   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.342791   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.343188   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.343284   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.343328   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.343339   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.343348   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.343581   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.343598   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.366719   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.366754   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.367052   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.367104   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.367119   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.467705   45580 pod_ready.go:102] pod "coredns-5dd5756b68-6n54l" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:34.935662   45580 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.473338078s)
	I1128 00:48:34.935745   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.935814   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.936143   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.936184   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.936193   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.936203   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.936211   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.936435   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.936482   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.977248   45580 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.435573064s)
	I1128 00:48:34.977318   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.977345   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.977738   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.977785   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.977806   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.977824   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.977837   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.979823   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.979823   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.979849   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.979860   45580 addons.go:467] Verifying addon metrics-server=true in "embed-certs-304541"
	I1128 00:48:34.981768   45580 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 00:48:33.129597   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:35.129880   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:32.912875   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:35.411225   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:34.983440   45580 addons.go:502] enable addons completed in 3.062399778s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1128 00:48:36.495977   45580 pod_ready.go:92] pod "coredns-5dd5756b68-6n54l" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.496002   45580 pod_ready.go:81] duration metric: took 4.175421265s waiting for pod "coredns-5dd5756b68-6n54l" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.496012   45580 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kjg5f" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.508269   45580 pod_ready.go:92] pod "coredns-5dd5756b68-kjg5f" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.508293   45580 pod_ready.go:81] duration metric: took 12.274473ms waiting for pod "coredns-5dd5756b68-kjg5f" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.508302   45580 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.515826   45580 pod_ready.go:92] pod "etcd-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.515855   45580 pod_ready.go:81] duration metric: took 7.545794ms waiting for pod "etcd-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.515873   45580 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.523206   45580 pod_ready.go:92] pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.523271   45580 pod_ready.go:81] duration metric: took 7.388614ms waiting for pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.523286   45580 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.529859   45580 pod_ready.go:92] pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.529881   45580 pod_ready.go:81] duration metric: took 6.58575ms waiting for pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.529889   45580 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w5ct2" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.857435   45580 pod_ready.go:92] pod "kube-proxy-w5ct2" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.857467   45580 pod_ready.go:81] duration metric: took 327.570428ms waiting for pod "kube-proxy-w5ct2" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.857481   45580 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:37.257433   45580 pod_ready.go:92] pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:37.257455   45580 pod_ready.go:81] duration metric: took 399.966903ms waiting for pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:37.257462   45580 pod_ready.go:38] duration metric: took 4.952954771s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:48:37.257476   45580 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:48:37.257523   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:48:37.275627   45580 api_server.go:72] duration metric: took 5.272574466s to wait for apiserver process to appear ...
	I1128 00:48:37.275656   45580 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:48:37.275673   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:48:37.283884   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 200:
	ok
	I1128 00:48:37.285716   45580 api_server.go:141] control plane version: v1.28.4
	I1128 00:48:37.285744   45580 api_server.go:131] duration metric: took 10.080776ms to wait for apiserver health ...
	I1128 00:48:37.285766   45580 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:48:37.460530   45580 system_pods.go:59] 9 kube-system pods found
	I1128 00:48:37.460555   45580 system_pods.go:61] "coredns-5dd5756b68-6n54l" [bb59175d-e2d9-4c98-9940-b705fa76512f] Running
	I1128 00:48:37.460560   45580 system_pods.go:61] "coredns-5dd5756b68-kjg5f" [bf956dfb-3a7f-4605-a849-ee887562fce5] Running
	I1128 00:48:37.460563   45580 system_pods.go:61] "etcd-embed-certs-304541" [7726ea36-d2a2-4ba8-ad20-e892b0c0059c] Running
	I1128 00:48:37.460568   45580 system_pods.go:61] "kube-apiserver-embed-certs-304541" [340e8023-afd3-4105-b513-3f232dfbd370] Running
	I1128 00:48:37.460572   45580 system_pods.go:61] "kube-controller-manager-embed-certs-304541" [ddba15be-e7c2-4cea-9256-1d7e6ea7b017] Running
	I1128 00:48:37.460575   45580 system_pods.go:61] "kube-proxy-w5ct2" [b3ac66db-fe8d-419d-9237-b0dd4077559a] Running
	I1128 00:48:37.460579   45580 system_pods.go:61] "kube-scheduler-embed-certs-304541" [30830958-963d-4571-8e47-acc169506ead] Running
	I1128 00:48:37.460585   45580 system_pods.go:61] "metrics-server-57f55c9bc5-xzz2t" [926e9a40-f0fe-47ea-8e44-6816132ec0c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:48:37.460589   45580 system_pods.go:61] "storage-provisioner" [c62a8419-b0e5-4330-a49b-986693e183b2] Running
	I1128 00:48:37.460597   45580 system_pods.go:74] duration metric: took 174.824783ms to wait for pod list to return data ...
	I1128 00:48:37.460619   45580 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:48:37.656404   45580 default_sa.go:45] found service account: "default"
	I1128 00:48:37.656431   45580 default_sa.go:55] duration metric: took 195.805836ms for default service account to be created ...
	I1128 00:48:37.656444   45580 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:48:37.861049   45580 system_pods.go:86] 9 kube-system pods found
	I1128 00:48:37.861086   45580 system_pods.go:89] "coredns-5dd5756b68-6n54l" [bb59175d-e2d9-4c98-9940-b705fa76512f] Running
	I1128 00:48:37.861095   45580 system_pods.go:89] "coredns-5dd5756b68-kjg5f" [bf956dfb-3a7f-4605-a849-ee887562fce5] Running
	I1128 00:48:37.861101   45580 system_pods.go:89] "etcd-embed-certs-304541" [7726ea36-d2a2-4ba8-ad20-e892b0c0059c] Running
	I1128 00:48:37.861108   45580 system_pods.go:89] "kube-apiserver-embed-certs-304541" [340e8023-afd3-4105-b513-3f232dfbd370] Running
	I1128 00:48:37.861116   45580 system_pods.go:89] "kube-controller-manager-embed-certs-304541" [ddba15be-e7c2-4cea-9256-1d7e6ea7b017] Running
	I1128 00:48:37.861122   45580 system_pods.go:89] "kube-proxy-w5ct2" [b3ac66db-fe8d-419d-9237-b0dd4077559a] Running
	I1128 00:48:37.861128   45580 system_pods.go:89] "kube-scheduler-embed-certs-304541" [30830958-963d-4571-8e47-acc169506ead] Running
	I1128 00:48:37.861140   45580 system_pods.go:89] "metrics-server-57f55c9bc5-xzz2t" [926e9a40-f0fe-47ea-8e44-6816132ec0c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:48:37.861157   45580 system_pods.go:89] "storage-provisioner" [c62a8419-b0e5-4330-a49b-986693e183b2] Running
	I1128 00:48:37.861171   45580 system_pods.go:126] duration metric: took 204.720501ms to wait for k8s-apps to be running ...
	I1128 00:48:37.861187   45580 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:48:37.861241   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:37.875344   45580 system_svc.go:56] duration metric: took 14.150294ms WaitForService to wait for kubelet.
	I1128 00:48:37.875380   45580 kubeadm.go:581] duration metric: took 5.872335245s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:48:37.875407   45580 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:48:38.057075   45580 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:48:38.057106   45580 node_conditions.go:123] node cpu capacity is 2
	I1128 00:48:38.057117   45580 node_conditions.go:105] duration metric: took 181.705529ms to run NodePressure ...
	I1128 00:48:38.057127   45580 start.go:228] waiting for startup goroutines ...
	I1128 00:48:38.057133   45580 start.go:233] waiting for cluster config update ...
	I1128 00:48:38.057141   45580 start.go:242] writing updated cluster config ...
	I1128 00:48:38.057366   45580 ssh_runner.go:195] Run: rm -f paused
	I1128 00:48:38.107014   45580 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 00:48:38.109071   45580 out.go:177] * Done! kubectl is now configured to use "embed-certs-304541" cluster and "default" namespace by default
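	[editor's note] With the embed-certs-304541 bring-up finished above, the same checks the tooling performed (the /healthz probe and the kube-system pod listing at 00:48:37) could be repeated by hand, sketched below. The endpoint IP and port are copied from the log; the kubectl context name assumes minikube's usual convention of naming the context after the profile, and the plain curl assumes default RBAC, where /healthz is typically readable without credentials (otherwise use the certs from the profile's kubeconfig):
	curl -k https://192.168.50.93:8443/healthz                       # expect: ok
	kubectl --context embed-certs-304541 -n kube-system get pods     # should list the 9 pods enumerated above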
	I1128 00:48:37.626062   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:38.819130   46126 pod_ready.go:81] duration metric: took 4m0.000531461s waiting for pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace to be "Ready" ...
	E1128 00:48:38.819159   46126 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:48:38.819168   46126 pod_ready.go:38] duration metric: took 4m5.602220781s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:48:38.819189   46126 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:48:38.819216   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 00:48:38.819269   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 00:48:38.882052   46126 cri.go:89] found id: "a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:38.882075   46126 cri.go:89] found id: ""
	I1128 00:48:38.882084   46126 logs.go:284] 1 containers: [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6]
	I1128 00:48:38.882143   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:38.886688   46126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 00:48:38.886751   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 00:48:38.926163   46126 cri.go:89] found id: "0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:38.926190   46126 cri.go:89] found id: ""
	I1128 00:48:38.926197   46126 logs.go:284] 1 containers: [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c]
	I1128 00:48:38.926259   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:38.930505   46126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 00:48:38.930558   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 00:48:38.979793   46126 cri.go:89] found id: "02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:38.979816   46126 cri.go:89] found id: ""
	I1128 00:48:38.979823   46126 logs.go:284] 1 containers: [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b]
	I1128 00:48:38.979876   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:38.984146   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 00:48:38.984244   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 00:48:39.033485   46126 cri.go:89] found id: "032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:39.033509   46126 cri.go:89] found id: ""
	I1128 00:48:39.033519   46126 logs.go:284] 1 containers: [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193]
	I1128 00:48:39.033575   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:39.038977   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 00:48:39.039038   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 00:48:39.079669   46126 cri.go:89] found id: "2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:39.079697   46126 cri.go:89] found id: ""
	I1128 00:48:39.079707   46126 logs.go:284] 1 containers: [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55]
	I1128 00:48:39.079767   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:39.084447   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 00:48:39.084515   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 00:48:39.121494   46126 cri.go:89] found id: "cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:39.121523   46126 cri.go:89] found id: ""
	I1128 00:48:39.121533   46126 logs.go:284] 1 containers: [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64]
	I1128 00:48:39.121594   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:39.126495   46126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 00:48:39.126554   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 00:48:39.168822   46126 cri.go:89] found id: ""
	I1128 00:48:39.168851   46126 logs.go:284] 0 containers: []
	W1128 00:48:39.168862   46126 logs.go:286] No container was found matching "kindnet"
	I1128 00:48:39.168869   46126 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 00:48:39.168924   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 00:48:39.213834   46126 cri.go:89] found id: "fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:39.213859   46126 cri.go:89] found id: ""
	I1128 00:48:39.213869   46126 logs.go:284] 1 containers: [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc]
	I1128 00:48:39.213914   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:39.218746   46126 logs.go:123] Gathering logs for dmesg ...
	I1128 00:48:39.218772   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 00:48:39.232098   46126 logs.go:123] Gathering logs for describe nodes ...
	I1128 00:48:39.232127   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 00:48:39.373641   46126 logs.go:123] Gathering logs for kube-apiserver [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6] ...
	I1128 00:48:39.373674   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:39.451311   46126 logs.go:123] Gathering logs for storage-provisioner [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc] ...
	I1128 00:48:39.451349   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:39.498219   46126 logs.go:123] Gathering logs for CRI-O ...
	I1128 00:48:39.498247   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 00:48:39.952276   46126 logs.go:123] Gathering logs for kubelet ...
	I1128 00:48:39.952314   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 00:48:40.008385   46126 logs.go:123] Gathering logs for coredns [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b] ...
	I1128 00:48:40.008425   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:40.052409   46126 logs.go:123] Gathering logs for kube-scheduler [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193] ...
	I1128 00:48:40.052443   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:40.092943   46126 logs.go:123] Gathering logs for kube-proxy [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55] ...
	I1128 00:48:40.092978   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:40.135490   46126 logs.go:123] Gathering logs for kube-controller-manager [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64] ...
	I1128 00:48:40.135520   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:40.189756   46126 logs.go:123] Gathering logs for container status ...
	I1128 00:48:40.189793   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 00:48:40.242615   46126 logs.go:123] Gathering logs for etcd [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c] ...
	I1128 00:48:40.242643   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:37.415898   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:39.910954   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:42.802428   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:48:42.818606   46126 api_server.go:72] duration metric: took 4m14.508070703s to wait for apiserver process to appear ...
	I1128 00:48:42.818632   46126 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:48:42.818667   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 00:48:42.818721   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 00:48:42.872566   46126 cri.go:89] found id: "a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:42.872603   46126 cri.go:89] found id: ""
	I1128 00:48:42.872613   46126 logs.go:284] 1 containers: [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6]
	I1128 00:48:42.872675   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:42.878165   46126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 00:48:42.878232   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 00:48:42.924667   46126 cri.go:89] found id: "0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:42.924689   46126 cri.go:89] found id: ""
	I1128 00:48:42.924699   46126 logs.go:284] 1 containers: [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c]
	I1128 00:48:42.924772   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:42.929748   46126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 00:48:42.929809   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 00:48:42.977787   46126 cri.go:89] found id: "02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:42.977815   46126 cri.go:89] found id: ""
	I1128 00:48:42.977825   46126 logs.go:284] 1 containers: [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b]
	I1128 00:48:42.977887   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:42.982991   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 00:48:42.983071   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 00:48:43.032835   46126 cri.go:89] found id: "032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:43.032866   46126 cri.go:89] found id: ""
	I1128 00:48:43.032876   46126 logs.go:284] 1 containers: [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193]
	I1128 00:48:43.032933   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:43.038635   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 00:48:43.038711   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 00:48:43.084051   46126 cri.go:89] found id: "2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:43.084080   46126 cri.go:89] found id: ""
	I1128 00:48:43.084090   46126 logs.go:284] 1 containers: [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55]
	I1128 00:48:43.084161   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:43.088908   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 00:48:43.088976   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 00:48:43.130640   46126 cri.go:89] found id: "cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:43.130666   46126 cri.go:89] found id: ""
	I1128 00:48:43.130676   46126 logs.go:284] 1 containers: [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64]
	I1128 00:48:43.130738   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:43.135354   46126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 00:48:43.135434   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 00:48:43.179655   46126 cri.go:89] found id: ""
	I1128 00:48:43.179690   46126 logs.go:284] 0 containers: []
	W1128 00:48:43.179699   46126 logs.go:286] No container was found matching "kindnet"
	I1128 00:48:43.179705   46126 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 00:48:43.179770   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 00:48:43.228309   46126 cri.go:89] found id: "fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:43.228335   46126 cri.go:89] found id: ""
	I1128 00:48:43.228343   46126 logs.go:284] 1 containers: [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc]
	I1128 00:48:43.228404   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:43.233343   46126 logs.go:123] Gathering logs for dmesg ...
	I1128 00:48:43.233375   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 00:48:43.247396   46126 logs.go:123] Gathering logs for describe nodes ...
	I1128 00:48:43.247430   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 00:48:43.386131   46126 logs.go:123] Gathering logs for kube-apiserver [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6] ...
	I1128 00:48:43.386181   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:43.463228   46126 logs.go:123] Gathering logs for kube-proxy [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55] ...
	I1128 00:48:43.463275   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:43.519469   46126 logs.go:123] Gathering logs for kube-controller-manager [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64] ...
	I1128 00:48:43.519511   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:43.581402   46126 logs.go:123] Gathering logs for container status ...
	I1128 00:48:43.581437   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 00:48:43.641804   46126 logs.go:123] Gathering logs for kubelet ...
	I1128 00:48:43.641844   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 00:48:43.707768   46126 logs.go:123] Gathering logs for etcd [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c] ...
	I1128 00:48:43.707807   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:43.779636   46126 logs.go:123] Gathering logs for coredns [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b] ...
	I1128 00:48:43.779673   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:43.822939   46126 logs.go:123] Gathering logs for kube-scheduler [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193] ...
	I1128 00:48:43.822972   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:43.869304   46126 logs.go:123] Gathering logs for storage-provisioner [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc] ...
	I1128 00:48:43.869344   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:43.917500   46126 logs.go:123] Gathering logs for CRI-O ...
	I1128 00:48:43.917528   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 00:48:46.886551   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:48:46.892696   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 200:
	ok
	I1128 00:48:46.894400   46126 api_server.go:141] control plane version: v1.28.4
	I1128 00:48:46.894424   46126 api_server.go:131] duration metric: took 4.075784232s to wait for apiserver health ...
	I1128 00:48:46.894433   46126 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:48:46.894455   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 00:48:46.894492   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 00:48:46.939259   46126 cri.go:89] found id: "a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:46.939291   46126 cri.go:89] found id: ""
	I1128 00:48:46.939302   46126 logs.go:284] 1 containers: [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6]
	I1128 00:48:46.939364   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:46.946934   46126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 00:48:46.947012   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 00:48:46.989896   46126 cri.go:89] found id: "0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:46.989920   46126 cri.go:89] found id: ""
	I1128 00:48:46.989930   46126 logs.go:284] 1 containers: [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c]
	I1128 00:48:46.989988   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:46.994923   46126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 00:48:46.994994   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 00:48:47.040298   46126 cri.go:89] found id: "02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:47.040330   46126 cri.go:89] found id: ""
	I1128 00:48:47.040339   46126 logs.go:284] 1 containers: [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b]
	I1128 00:48:47.040396   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.045041   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 00:48:47.045113   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 00:48:47.093254   46126 cri.go:89] found id: "032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:47.093282   46126 cri.go:89] found id: ""
	I1128 00:48:47.093290   46126 logs.go:284] 1 containers: [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193]
	I1128 00:48:47.093345   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.097856   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 00:48:47.097916   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 00:48:47.150763   46126 cri.go:89] found id: "2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:47.150790   46126 cri.go:89] found id: ""
	I1128 00:48:47.150800   46126 logs.go:284] 1 containers: [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55]
	I1128 00:48:47.150855   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.155272   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 00:48:47.155348   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 00:48:47.203549   46126 cri.go:89] found id: "cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:47.203586   46126 cri.go:89] found id: ""
	I1128 00:48:47.203600   46126 logs.go:284] 1 containers: [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64]
	I1128 00:48:47.203670   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.209313   46126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 00:48:47.209384   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 00:48:42.410241   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:44.909607   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:46.893894   45815 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (16.438515297s)
	I1128 00:48:46.893965   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:46.909967   45815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:48:46.919457   45815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:48:46.928580   45815 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:48:46.928629   45815 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 00:48:46.989655   45815 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.0
	I1128 00:48:46.989772   45815 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 00:48:47.162717   45815 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 00:48:47.162868   45815 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 00:48:47.163002   45815 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 00:48:47.453392   45815 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 00:48:47.455125   45815 out.go:204]   - Generating certificates and keys ...
	I1128 00:48:47.455291   45815 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 00:48:47.455388   45815 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 00:48:47.455530   45815 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 00:48:47.455605   45815 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 00:48:47.456116   45815 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 00:48:47.456786   45815 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 00:48:47.457320   45815 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 00:48:47.457814   45815 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 00:48:47.458228   45815 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 00:48:47.458584   45815 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 00:48:47.458984   45815 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 00:48:47.459080   45815 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 00:48:47.654823   45815 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 00:48:47.858053   45815 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1128 00:48:48.006981   45815 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 00:48:48.256244   45815 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 00:48:48.381440   45815 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 00:48:48.381976   45815 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 00:48:48.384696   45815 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 00:48:48.386824   45815 out.go:204]   - Booting up control plane ...
	I1128 00:48:48.386943   45815 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 00:48:48.387057   45815 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 00:48:48.387155   45815 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 00:48:48.404036   45815 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 00:48:48.408139   45815 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 00:48:48.408584   45815 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 00:48:48.539731   45815 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 00:48:47.259312   46126 cri.go:89] found id: ""
	I1128 00:48:47.259343   46126 logs.go:284] 0 containers: []
	W1128 00:48:47.259353   46126 logs.go:286] No container was found matching "kindnet"
	I1128 00:48:47.259361   46126 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 00:48:47.259421   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 00:48:47.308650   46126 cri.go:89] found id: "fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:47.308681   46126 cri.go:89] found id: ""
	I1128 00:48:47.308692   46126 logs.go:284] 1 containers: [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc]
	I1128 00:48:47.308764   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.313702   46126 logs.go:123] Gathering logs for dmesg ...
	I1128 00:48:47.313727   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 00:48:47.327753   46126 logs.go:123] Gathering logs for describe nodes ...
	I1128 00:48:47.327788   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 00:48:47.490493   46126 logs.go:123] Gathering logs for etcd [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c] ...
	I1128 00:48:47.490525   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:47.554064   46126 logs.go:123] Gathering logs for coredns [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b] ...
	I1128 00:48:47.554097   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:47.604401   46126 logs.go:123] Gathering logs for kube-proxy [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55] ...
	I1128 00:48:47.604433   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:47.643173   46126 logs.go:123] Gathering logs for kube-controller-manager [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64] ...
	I1128 00:48:47.643211   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:47.707400   46126 logs.go:123] Gathering logs for storage-provisioner [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc] ...
	I1128 00:48:47.707432   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:47.763831   46126 logs.go:123] Gathering logs for container status ...
	I1128 00:48:47.763860   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 00:48:47.817244   46126 logs.go:123] Gathering logs for kubelet ...
	I1128 00:48:47.817278   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 00:48:47.872462   46126 logs.go:123] Gathering logs for kube-apiserver [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6] ...
	I1128 00:48:47.872499   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:47.930695   46126 logs.go:123] Gathering logs for kube-scheduler [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193] ...
	I1128 00:48:47.930729   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:47.987718   46126 logs.go:123] Gathering logs for CRI-O ...
	I1128 00:48:47.987748   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 00:48:50.856470   46126 system_pods.go:59] 8 kube-system pods found
	I1128 00:48:50.856510   46126 system_pods.go:61] "coredns-5dd5756b68-n7qpb" [d027f799-6ced-488e-a4f7-6df351193c64] Running
	I1128 00:48:50.856518   46126 system_pods.go:61] "etcd-default-k8s-diff-port-488423" [55bf80da-df13-4429-962c-7fdb5ab44ea8] Running
	I1128 00:48:50.856525   46126 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-488423" [88715645-e98e-42be-ad99-cc7711605abc] Running
	I1128 00:48:50.856533   46126 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-488423" [07935350-12e0-4e86-8f88-7e03890aa417] Running
	I1128 00:48:50.856539   46126 system_pods.go:61] "kube-proxy-2sfbm" [8d92ac1f-4070-4000-9bc6-3d277e0c8c6e] Running
	I1128 00:48:50.856545   46126 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-488423" [42baed98-6b29-4f33-8bb3-df082a1b36ce] Running
	I1128 00:48:50.856558   46126 system_pods.go:61] "metrics-server-57f55c9bc5-fk9xx" [8b0d0cd6-41c5-4b67-98f9-f046e959e0e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:48:50.856571   46126 system_pods.go:61] "storage-provisioner" [f1e6e7d1-86aa-403c-b753-2b94beb7d7b1] Running
	I1128 00:48:50.856579   46126 system_pods.go:74] duration metric: took 3.962140088s to wait for pod list to return data ...
	I1128 00:48:50.856589   46126 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:48:50.859308   46126 default_sa.go:45] found service account: "default"
	I1128 00:48:50.859338   46126 default_sa.go:55] duration metric: took 2.741136ms for default service account to be created ...
	I1128 00:48:50.859347   46126 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:48:50.865347   46126 system_pods.go:86] 8 kube-system pods found
	I1128 00:48:50.865371   46126 system_pods.go:89] "coredns-5dd5756b68-n7qpb" [d027f799-6ced-488e-a4f7-6df351193c64] Running
	I1128 00:48:50.865377   46126 system_pods.go:89] "etcd-default-k8s-diff-port-488423" [55bf80da-df13-4429-962c-7fdb5ab44ea8] Running
	I1128 00:48:50.865382   46126 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-488423" [88715645-e98e-42be-ad99-cc7711605abc] Running
	I1128 00:48:50.865387   46126 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-488423" [07935350-12e0-4e86-8f88-7e03890aa417] Running
	I1128 00:48:50.865391   46126 system_pods.go:89] "kube-proxy-2sfbm" [8d92ac1f-4070-4000-9bc6-3d277e0c8c6e] Running
	I1128 00:48:50.865395   46126 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-488423" [42baed98-6b29-4f33-8bb3-df082a1b36ce] Running
	I1128 00:48:50.865405   46126 system_pods.go:89] "metrics-server-57f55c9bc5-fk9xx" [8b0d0cd6-41c5-4b67-98f9-f046e959e0e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:48:50.865413   46126 system_pods.go:89] "storage-provisioner" [f1e6e7d1-86aa-403c-b753-2b94beb7d7b1] Running
	I1128 00:48:50.865425   46126 system_pods.go:126] duration metric: took 6.071837ms to wait for k8s-apps to be running ...
	I1128 00:48:50.865441   46126 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:48:50.865490   46126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:50.882729   46126 system_svc.go:56] duration metric: took 17.277766ms WaitForService to wait for kubelet.
	I1128 00:48:50.882767   46126 kubeadm.go:581] duration metric: took 4m22.572235871s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:48:50.882796   46126 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:48:50.886638   46126 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:48:50.886671   46126 node_conditions.go:123] node cpu capacity is 2
	I1128 00:48:50.886684   46126 node_conditions.go:105] duration metric: took 3.881703ms to run NodePressure ...
	I1128 00:48:50.886699   46126 start.go:228] waiting for startup goroutines ...
	I1128 00:48:50.886712   46126 start.go:233] waiting for cluster config update ...
	I1128 00:48:50.886725   46126 start.go:242] writing updated cluster config ...
	I1128 00:48:50.886995   46126 ssh_runner.go:195] Run: rm -f paused
	I1128 00:48:50.947562   46126 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 00:48:50.949119   46126 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-488423" cluster and "default" namespace by default
	I1128 00:48:47.419653   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:49.909410   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:51.909739   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:53.910387   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:56.408786   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:56.542000   45815 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002009 seconds
	I1128 00:48:56.567203   45815 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 00:48:56.583239   45815 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 00:48:57.114661   45815 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 00:48:57.114917   45815 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-473615 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 00:48:57.633030   45815 kubeadm.go:322] [bootstrap-token] Using token: vz7ey4.v2qfoncp2ok7nh54
	I1128 00:48:57.634835   45815 out.go:204]   - Configuring RBAC rules ...
	I1128 00:48:57.634961   45815 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 00:48:57.640535   45815 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 00:48:57.653911   45815 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 00:48:57.658740   45815 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 00:48:57.662927   45815 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 00:48:57.667238   45815 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 00:48:57.688281   45815 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 00:48:57.949630   45815 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 00:48:58.055744   45815 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 00:48:58.057024   45815 kubeadm.go:322] 
	I1128 00:48:58.057159   45815 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 00:48:58.057179   45815 kubeadm.go:322] 
	I1128 00:48:58.057290   45815 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 00:48:58.057310   45815 kubeadm.go:322] 
	I1128 00:48:58.057343   45815 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 00:48:58.057431   45815 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 00:48:58.057518   45815 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 00:48:58.057536   45815 kubeadm.go:322] 
	I1128 00:48:58.057601   45815 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 00:48:58.057609   45815 kubeadm.go:322] 
	I1128 00:48:58.057673   45815 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 00:48:58.057678   45815 kubeadm.go:322] 
	I1128 00:48:58.057719   45815 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 00:48:58.057787   45815 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 00:48:58.057841   45815 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 00:48:58.057844   45815 kubeadm.go:322] 
	I1128 00:48:58.057921   45815 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 00:48:58.057987   45815 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 00:48:58.057991   45815 kubeadm.go:322] 
	I1128 00:48:58.058062   45815 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vz7ey4.v2qfoncp2ok7nh54 \
	I1128 00:48:58.058148   45815 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1128 00:48:58.058183   45815 kubeadm.go:322] 	--control-plane 
	I1128 00:48:58.058198   45815 kubeadm.go:322] 
	I1128 00:48:58.058266   45815 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 00:48:58.058272   45815 kubeadm.go:322] 
	I1128 00:48:58.058347   45815 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vz7ey4.v2qfoncp2ok7nh54 \
	I1128 00:48:58.058449   45815 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1128 00:48:58.059375   45815 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 00:48:58.059404   45815 cni.go:84] Creating CNI manager for ""
	I1128 00:48:58.059415   45815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:48:58.061524   45815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:48:58.062981   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:48:58.121061   45815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:48:58.143978   45815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:48:58.144060   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:58.144068   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=no-preload-473615 minikube.k8s.io/updated_at=2023_11_28T00_48_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:58.495592   45815 ops.go:34] apiserver oom_adj: -16
	I1128 00:48:58.495756   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:58.590073   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:58.412254   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:00.912329   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:59.189174   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:59.688440   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:00.189285   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:00.688724   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:01.189197   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:01.688512   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:02.189219   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:02.689235   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:03.189405   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:03.689243   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:03.414190   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:05.909164   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:04.188645   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:04.688928   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:05.189330   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:05.689126   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:06.189257   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:06.688476   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:07.189386   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:07.689051   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:08.188961   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:08.689080   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:09.188591   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:09.688502   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:10.188492   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:10.303728   45815 kubeadm.go:1081] duration metric: took 12.159747313s to wait for elevateKubeSystemPrivileges.
	I1128 00:49:10.303773   45815 kubeadm.go:406] StartCluster complete in 5m13.413969558s
	I1128 00:49:10.303794   45815 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:49:10.303880   45815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:49:10.306274   45815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:49:10.306559   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:49:10.306678   45815 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:49:10.306764   45815 addons.go:69] Setting storage-provisioner=true in profile "no-preload-473615"
	I1128 00:49:10.306786   45815 addons.go:231] Setting addon storage-provisioner=true in "no-preload-473615"
	W1128 00:49:10.306799   45815 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:49:10.306822   45815 config.go:182] Loaded profile config "no-preload-473615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 00:49:10.306844   45815 host.go:66] Checking if "no-preload-473615" exists ...
	I1128 00:49:10.306903   45815 addons.go:69] Setting default-storageclass=true in profile "no-preload-473615"
	I1128 00:49:10.306924   45815 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-473615"
	I1128 00:49:10.307065   45815 addons.go:69] Setting metrics-server=true in profile "no-preload-473615"
	I1128 00:49:10.307089   45815 addons.go:231] Setting addon metrics-server=true in "no-preload-473615"
	W1128 00:49:10.307097   45815 addons.go:240] addon metrics-server should already be in state true
	I1128 00:49:10.307140   45815 host.go:66] Checking if "no-preload-473615" exists ...
	I1128 00:49:10.307283   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.307284   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.307366   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.307313   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.307600   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.307650   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.323788   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I1128 00:49:10.324333   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.324915   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.324940   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.325212   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42505
	I1128 00:49:10.325655   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.325825   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.326138   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.326156   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.326346   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.326375   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.326504   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.326968   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.326991   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.330263   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44581
	I1128 00:49:10.331124   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.331538   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.331559   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.331951   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.332131   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:49:10.335360   45815 addons.go:231] Setting addon default-storageclass=true in "no-preload-473615"
	W1128 00:49:10.335378   45815 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:49:10.335405   45815 host.go:66] Checking if "no-preload-473615" exists ...
	I1128 00:49:10.335685   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.335715   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.346750   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42245
	I1128 00:49:10.346822   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46137
	I1128 00:49:10.347279   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.347400   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.347703   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.347731   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.347906   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.347919   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.347983   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.348096   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:49:10.348232   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.348429   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:49:10.350025   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:49:10.352544   45815 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 00:49:10.350506   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:49:10.355541   45815 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:49:10.354491   45815 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 00:49:10.356963   45815 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:49:10.356980   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:49:10.356993   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:49:10.355570   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 00:49:10.357068   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:49:10.356139   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I1128 00:49:10.356295   45815 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-473615" context rescaled to 1 replicas
	I1128 00:49:10.357149   45815 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.195 Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:49:10.358543   45815 out.go:177] * Verifying Kubernetes components...
	I1128 00:49:10.359926   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:49:10.357719   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.360555   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.360575   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.361020   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.361318   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.361551   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:49:10.361574   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.361736   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:49:10.361938   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:49:10.362037   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.362129   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:49:10.362295   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.362317   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.362381   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:49:10.362676   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:49:10.362699   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.362961   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:49:10.363188   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:49:10.363360   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:49:10.363499   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:49:10.381194   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42707
	I1128 00:49:10.381543   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.382012   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.382032   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.382399   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.382584   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:49:10.384269   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:49:10.384500   45815 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:49:10.384513   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:49:10.384527   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:49:10.387448   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.388000   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:49:10.388027   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.388169   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:49:10.388335   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:49:10.388477   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:49:10.388578   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:49:10.513157   45815 node_ready.go:35] waiting up to 6m0s for node "no-preload-473615" to be "Ready" ...
	I1128 00:49:10.513251   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 00:49:10.546158   45815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:49:10.566225   45815 node_ready.go:49] node "no-preload-473615" has status "Ready":"True"
	I1128 00:49:10.566248   45815 node_ready.go:38] duration metric: took 53.063342ms waiting for node "no-preload-473615" to be "Ready" ...
	I1128 00:49:10.566259   45815 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:49:10.589374   45815 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 00:49:10.589400   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 00:49:10.608085   45815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:49:10.657717   45815 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 00:49:10.657746   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 00:49:10.693300   45815 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:10.745796   45815 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:49:10.745821   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 00:49:10.820139   45815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:49:10.848411   45815 pod_ready.go:92] pod "etcd-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:10.848444   45815 pod_ready.go:81] duration metric: took 155.116855ms waiting for pod "etcd-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:10.848459   45815 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:11.035904   45815 pod_ready.go:92] pod "kube-apiserver-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:11.035929   45815 pod_ready.go:81] duration metric: took 187.461745ms waiting for pod "kube-apiserver-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:11.035941   45815 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:11.269000   45815 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1128 00:49:11.634167   45815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.087967346s)
	I1128 00:49:11.634213   45815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.026096699s)
	I1128 00:49:11.634226   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.634239   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.634250   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.634272   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.634578   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.634621   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.634637   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.634639   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.634649   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.634650   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.634656   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.634660   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.634595   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:11.634942   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:11.634958   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:11.634986   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.635009   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.634989   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.635049   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.657473   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.657495   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.657814   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.657828   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.758491   45815 pod_ready.go:92] pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:11.758514   45815 pod_ready.go:81] duration metric: took 722.565796ms waiting for pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:11.758525   45815 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bv5lq" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:12.084449   45815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.264259029s)
	I1128 00:49:12.084510   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:12.084524   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:12.084846   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:12.084865   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:12.084875   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:12.084870   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:12.084885   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:12.085142   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:12.085152   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:12.085164   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:12.085174   45815 addons.go:467] Verifying addon metrics-server=true in "no-preload-473615"
	I1128 00:49:12.087081   45815 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1128 00:49:08.409321   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:10.909836   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:12.088572   45815 addons.go:502] enable addons completed in 1.781896775s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1128 00:49:13.830651   45815 pod_ready.go:102] pod "kube-proxy-bv5lq" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:14.830780   45815 pod_ready.go:92] pod "kube-proxy-bv5lq" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:14.830805   45815 pod_ready.go:81] duration metric: took 3.072274458s waiting for pod "kube-proxy-bv5lq" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:14.830815   45815 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:14.836248   45815 pod_ready.go:92] pod "kube-scheduler-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:14.836266   45815 pod_ready.go:81] duration metric: took 5.444378ms waiting for pod "kube-scheduler-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:14.836273   45815 pod_ready.go:38] duration metric: took 4.270002588s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:49:14.836288   45815 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:49:14.836329   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:49:14.860322   45815 api_server.go:72] duration metric: took 4.503144983s to wait for apiserver process to appear ...
	I1128 00:49:14.860354   45815 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:49:14.860375   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:49:14.866977   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 200:
	ok
	I1128 00:49:14.868294   45815 api_server.go:141] control plane version: v1.29.0-rc.0
	I1128 00:49:14.868318   45815 api_server.go:131] duration metric: took 7.955565ms to wait for apiserver health ...
	I1128 00:49:14.868328   45815 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:49:14.875943   45815 system_pods.go:59] 8 kube-system pods found
	I1128 00:49:14.875972   45815 system_pods.go:61] "coredns-76f75df574-kbrjg" [881031bb-af46-48a7-b609-7fb1c96b2056] Running
	I1128 00:49:14.875979   45815 system_pods.go:61] "etcd-no-preload-473615" [ae2b57ca-5a22-4f4b-b227-00edfbb3b520] Running
	I1128 00:49:14.875986   45815 system_pods.go:61] "kube-apiserver-no-preload-473615" [9e9104c8-ee9f-4370-b92e-d301ea9cd880] Running
	I1128 00:49:14.875993   45815 system_pods.go:61] "kube-controller-manager-no-preload-473615" [f52dccb6-3d88-44b2-b733-38dd240dffa5] Running
	I1128 00:49:14.875999   45815 system_pods.go:61] "kube-proxy-bv5lq" [fe88f49f-5fc1-4877-a982-38fee04c9e2d] Running
	I1128 00:49:14.876005   45815 system_pods.go:61] "kube-scheduler-no-preload-473615" [8d6a3177-757a-493e-ba5e-265f95d6f462] Running
	I1128 00:49:14.876019   45815 system_pods.go:61] "metrics-server-57f55c9bc5-mpqdq" [8cef6d4c-e932-4c97-8d87-3b4c3777c8b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:49:14.876031   45815 system_pods.go:61] "storage-provisioner" [b8fc9309-7354-44e3-aa10-f4fb3c185f62] Running
	I1128 00:49:14.876042   45815 system_pods.go:74] duration metric: took 7.70749ms to wait for pod list to return data ...
	I1128 00:49:14.876058   45815 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:49:14.918080   45815 default_sa.go:45] found service account: "default"
	I1128 00:49:14.918107   45815 default_sa.go:55] duration metric: took 42.036279ms for default service account to be created ...
	I1128 00:49:14.918119   45815 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:49:15.120338   45815 system_pods.go:86] 8 kube-system pods found
	I1128 00:49:15.120368   45815 system_pods.go:89] "coredns-76f75df574-kbrjg" [881031bb-af46-48a7-b609-7fb1c96b2056] Running
	I1128 00:49:15.120376   45815 system_pods.go:89] "etcd-no-preload-473615" [ae2b57ca-5a22-4f4b-b227-00edfbb3b520] Running
	I1128 00:49:15.120383   45815 system_pods.go:89] "kube-apiserver-no-preload-473615" [9e9104c8-ee9f-4370-b92e-d301ea9cd880] Running
	I1128 00:49:15.120390   45815 system_pods.go:89] "kube-controller-manager-no-preload-473615" [f52dccb6-3d88-44b2-b733-38dd240dffa5] Running
	I1128 00:49:15.120395   45815 system_pods.go:89] "kube-proxy-bv5lq" [fe88f49f-5fc1-4877-a982-38fee04c9e2d] Running
	I1128 00:49:15.120401   45815 system_pods.go:89] "kube-scheduler-no-preload-473615" [8d6a3177-757a-493e-ba5e-265f95d6f462] Running
	I1128 00:49:15.120413   45815 system_pods.go:89] "metrics-server-57f55c9bc5-mpqdq" [8cef6d4c-e932-4c97-8d87-3b4c3777c8b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:49:15.120420   45815 system_pods.go:89] "storage-provisioner" [b8fc9309-7354-44e3-aa10-f4fb3c185f62] Running
	I1128 00:49:15.120437   45815 system_pods.go:126] duration metric: took 202.310611ms to wait for k8s-apps to be running ...
	I1128 00:49:15.120452   45815 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:49:15.120501   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:49:15.134858   45815 system_svc.go:56] duration metric: took 14.396652ms WaitForService to wait for kubelet.
	I1128 00:49:15.134886   45815 kubeadm.go:581] duration metric: took 4.777716544s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:49:15.134902   45815 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:49:15.318344   45815 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:49:15.318370   45815 node_conditions.go:123] node cpu capacity is 2
	I1128 00:49:15.318380   45815 node_conditions.go:105] duration metric: took 183.473974ms to run NodePressure ...
	I1128 00:49:15.318390   45815 start.go:228] waiting for startup goroutines ...
	I1128 00:49:15.318396   45815 start.go:233] waiting for cluster config update ...
	I1128 00:49:15.318405   45815 start.go:242] writing updated cluster config ...
	I1128 00:49:15.318651   45815 ssh_runner.go:195] Run: rm -f paused
	I1128 00:49:15.368036   45815 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.0 (minor skew: 1)
	I1128 00:49:15.369853   45815 out.go:177] * Done! kubectl is now configured to use "no-preload-473615" cluster and "default" namespace by default
	I1128 00:49:12.909910   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:15.420062   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:17.421038   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:19.909444   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:21.910293   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:24.412962   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:26.908733   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:28.910353   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:31.104114   45269 pod_ready.go:81] duration metric: took 4m0.000750315s waiting for pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace to be "Ready" ...
	E1128 00:49:31.104164   45269 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:49:31.104219   45269 pod_ready.go:38] duration metric: took 4m1.201800344s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:49:31.104258   45269 kubeadm.go:640] restartCluster took 5m3.38216869s
	W1128 00:49:31.104338   45269 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 00:49:31.104371   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 00:49:35.883236   45269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.778829992s)
	I1128 00:49:35.883312   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:49:35.898846   45269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:49:35.910716   45269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:49:35.921838   45269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:49:35.921883   45269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1128 00:49:35.987683   45269 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1128 00:49:35.987889   45269 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 00:49:36.153771   45269 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 00:49:36.153926   45269 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 00:49:36.154056   45269 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 00:49:36.387112   45269 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 00:49:36.387236   45269 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 00:49:36.394929   45269 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1128 00:49:36.523951   45269 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 00:49:36.526180   45269 out.go:204]   - Generating certificates and keys ...
	I1128 00:49:36.526284   45269 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 00:49:36.526378   45269 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 00:49:36.526508   45269 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 00:49:36.526603   45269 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 00:49:36.526723   45269 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 00:49:36.526807   45269 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 00:49:36.526928   45269 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 00:49:36.527026   45269 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 00:49:36.527127   45269 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 00:49:36.527671   45269 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 00:49:36.527734   45269 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 00:49:36.527807   45269 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 00:49:36.966756   45269 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 00:49:37.138717   45269 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 00:49:37.307916   45269 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 00:49:37.374115   45269 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 00:49:37.375393   45269 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 00:49:37.377224   45269 out.go:204]   - Booting up control plane ...
	I1128 00:49:37.377338   45269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 00:49:37.381887   45269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 00:49:37.383114   45269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 00:49:37.384032   45269 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 00:49:37.387460   45269 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 00:49:47.893342   45269 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.504508 seconds
	I1128 00:49:47.893497   45269 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 00:49:47.911409   45269 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 00:49:48.437988   45269 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 00:49:48.438226   45269 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-732472 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1128 00:49:48.947631   45269 kubeadm.go:322] [bootstrap-token] Using token: g2kx2b.r3qu6fui94rrmu2m
	I1128 00:49:48.949581   45269 out.go:204]   - Configuring RBAC rules ...
	I1128 00:49:48.949746   45269 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 00:49:48.960004   45269 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 00:49:48.969068   45269 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 00:49:48.973998   45269 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 00:49:48.982331   45269 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 00:49:49.099721   45269 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 00:49:49.367382   45269 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 00:49:49.369069   45269 kubeadm.go:322] 
	I1128 00:49:49.369159   45269 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 00:49:49.369196   45269 kubeadm.go:322] 
	I1128 00:49:49.369325   45269 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 00:49:49.369339   45269 kubeadm.go:322] 
	I1128 00:49:49.369383   45269 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 00:49:49.369449   45269 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 00:49:49.369519   45269 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 00:49:49.369541   45269 kubeadm.go:322] 
	I1128 00:49:49.369619   45269 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 00:49:49.369725   45269 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 00:49:49.369822   45269 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 00:49:49.369839   45269 kubeadm.go:322] 
	I1128 00:49:49.369975   45269 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1128 00:49:49.370080   45269 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 00:49:49.370092   45269 kubeadm.go:322] 
	I1128 00:49:49.370202   45269 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token g2kx2b.r3qu6fui94rrmu2m \
	I1128 00:49:49.370371   45269 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1128 00:49:49.370419   45269 kubeadm.go:322]     --control-plane 	  
	I1128 00:49:49.370432   45269 kubeadm.go:322] 
	I1128 00:49:49.370515   45269 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 00:49:49.370527   45269 kubeadm.go:322] 
	I1128 00:49:49.370639   45269 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g2kx2b.r3qu6fui94rrmu2m \
	I1128 00:49:49.370783   45269 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1128 00:49:49.371106   45269 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 00:49:49.371134   45269 cni.go:84] Creating CNI manager for ""
	I1128 00:49:49.371148   45269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:49:49.373008   45269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:49:49.374371   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:49:49.384861   45269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:49:49.402517   45269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:49:49.402582   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:49.402598   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=old-k8s-version-732472 minikube.k8s.io/updated_at=2023_11_28T00_49_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:49.441523   45269 ops.go:34] apiserver oom_adj: -16
	I1128 00:49:49.674343   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:49.796920   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:50.420537   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:50.920042   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:51.420533   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:51.920538   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:52.420730   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:52.920078   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:53.420670   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:53.920876   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:54.420798   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:54.920702   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:55.420180   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:55.920033   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:56.420702   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:56.920106   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:57.420244   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:57.920637   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:58.420226   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:58.920874   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:59.420228   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:59.920070   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:00.420845   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:00.920883   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:01.420977   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:01.920275   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:02.420097   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:02.920582   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:03.420001   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:03.919906   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:04.420071   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:04.580992   45269 kubeadm.go:1081] duration metric: took 15.178468662s to wait for elevateKubeSystemPrivileges.
	I1128 00:50:04.581023   45269 kubeadm.go:406] StartCluster complete in 5m36.912120738s
	I1128 00:50:04.581042   45269 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:50:04.581125   45269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:50:04.582704   45269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:50:04.582966   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:50:04.583000   45269 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:50:04.583077   45269 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-732472"
	I1128 00:50:04.583105   45269 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-732472"
	W1128 00:50:04.583116   45269 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:50:04.583192   45269 host.go:66] Checking if "old-k8s-version-732472" exists ...
	I1128 00:50:04.583206   45269 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-732472"
	I1128 00:50:04.583227   45269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-732472"
	I1128 00:50:04.583540   45269 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-732472"
	I1128 00:50:04.583565   45269 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-732472"
	W1128 00:50:04.583573   45269 addons.go:240] addon metrics-server should already be in state true
	I1128 00:50:04.583609   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.583635   45269 host.go:66] Checking if "old-k8s-version-732472" exists ...
	I1128 00:50:04.583640   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.583676   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.583643   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.583193   45269 config.go:182] Loaded profile config "old-k8s-version-732472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 00:50:04.584015   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.584069   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.602419   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36231
	I1128 00:50:04.602558   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35981
	I1128 00:50:04.602646   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36113
	I1128 00:50:04.603020   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.603118   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.603196   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.603571   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.603572   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.603597   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.603611   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.603729   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.603753   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.603939   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.603973   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.604086   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.604378   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:50:04.604489   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.604521   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.604617   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.604646   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.608900   45269 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-732472"
	W1128 00:50:04.608925   45269 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:50:04.608953   45269 host.go:66] Checking if "old-k8s-version-732472" exists ...
	I1128 00:50:04.611555   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.611628   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.622409   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33595
	I1128 00:50:04.622446   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45323
	I1128 00:50:04.622876   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.623000   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.623394   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.623424   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.623534   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.623567   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.623886   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.624365   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.624368   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:50:04.624556   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:50:04.626412   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:50:04.626443   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:50:04.629006   45269 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 00:50:04.630723   45269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:50:04.632378   45269 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:50:04.632395   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:50:04.632409   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:50:04.630641   45269 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 00:50:04.632467   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 00:50:04.632479   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:50:04.632126   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38563
	I1128 00:50:04.633062   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.633666   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.633692   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.634447   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.635020   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.635053   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.636332   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.636387   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.636733   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:50:04.636772   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.636795   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:50:04.636830   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.636952   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:50:04.637085   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:50:04.637132   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:50:04.637245   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:50:04.637296   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:50:04.637413   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:50:04.637448   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:50:04.637594   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:50:04.651941   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I1128 00:50:04.652604   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.653192   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.653222   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.653677   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.653838   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:50:04.655532   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:50:04.655848   45269 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:50:04.655868   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:50:04.655890   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:50:04.658852   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.659252   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:50:04.659280   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.659426   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:50:04.659602   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:50:04.659971   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:50:04.660096   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	W1128 00:50:04.792826   45269 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "old-k8s-version-732472" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1128 00:50:04.792863   45269 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1128 00:50:04.792890   45269 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:50:04.795799   45269 out.go:177] * Verifying Kubernetes components...
	I1128 00:50:04.797469   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:50:04.870889   45269 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-732472" to be "Ready" ...
	I1128 00:50:04.871024   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 00:50:04.888333   45269 node_ready.go:49] node "old-k8s-version-732472" has status "Ready":"True"
	I1128 00:50:04.888359   45269 node_ready.go:38] duration metric: took 17.44205ms waiting for node "old-k8s-version-732472" to be "Ready" ...
	I1128 00:50:04.888372   45269 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:50:04.899414   45269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:50:04.902681   45269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:04.904708   45269 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 00:50:04.904734   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 00:50:04.947930   45269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:50:04.977094   45269 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 00:50:04.977123   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 00:50:05.195712   45269 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:50:05.195795   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 00:50:05.292058   45269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:50:06.383144   45269 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.512083846s)
	I1128 00:50:06.383170   45269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.483727542s)
	I1128 00:50:06.383180   45269 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1128 00:50:06.383208   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.383221   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.383572   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.383599   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.383608   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.383606   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Closing plugin on server side
	I1128 00:50:06.383618   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.383835   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.383851   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.383870   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Closing plugin on server side
	I1128 00:50:06.423407   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.423447   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.423758   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.423783   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.423799   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Closing plugin on server side
	I1128 00:50:06.678261   45269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.73029562s)
	I1128 00:50:06.678312   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.678326   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.678640   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.678655   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.678663   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.678672   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.678927   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.678955   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.762082   45269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.46997729s)
	I1128 00:50:06.762140   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.762160   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.762538   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.762557   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.762569   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.762579   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.762599   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Closing plugin on server side
	I1128 00:50:06.762815   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.762830   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.762840   45269 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-732472"
	I1128 00:50:06.765825   45269 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 00:50:06.767637   45269 addons.go:502] enable addons completed in 2.184637132s: enabled=[default-storageclass storage-provisioner metrics-server]
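The two "Completed" lines above are minikube applying the storage-provisioner and metrics-server manifests with the pinned v1.16.0 kubectl inside the guest over SSH. A host-side sketch of the same apply, assuming the manifest paths from the log and that a plain kubectl with the right kubeconfig stands in for minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Manifest paths mirror the ones in the log; in minikube they live on the
        // guest VM and are applied with the in-VM kubectl over SSH.
        manifests := []string{
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }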
	I1128 00:50:06.959495   45269 pod_ready.go:102] pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace has status "Ready":"False"
	I1128 00:50:08.961160   45269 pod_ready.go:102] pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace has status "Ready":"False"
	I1128 00:50:11.459984   45269 pod_ready.go:102] pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace has status "Ready":"False"
	I1128 00:50:12.959294   45269 pod_ready.go:92] pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace has status "Ready":"True"
	I1128 00:50:12.959317   45269 pod_ready.go:81] duration metric: took 8.056612005s waiting for pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.959326   45269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-fsfpw" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.973244   45269 pod_ready.go:92] pod "coredns-5644d7b6d9-fsfpw" in "kube-system" namespace has status "Ready":"True"
	I1128 00:50:12.973268   45269 pod_ready.go:81] duration metric: took 13.936307ms waiting for pod "coredns-5644d7b6d9-fsfpw" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.973278   45269 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-88chq" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.980471   45269 pod_ready.go:92] pod "kube-proxy-88chq" in "kube-system" namespace has status "Ready":"True"
	I1128 00:50:12.980489   45269 pod_ready.go:81] duration metric: took 7.20414ms waiting for pod "kube-proxy-88chq" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.980496   45269 pod_ready.go:38] duration metric: took 8.092113593s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
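The pod_ready lines above show minikube polling the core kube-system pods (CoreDNS, kube-proxy) until each one's Ready condition turns True. A minimal sketch of that check, assuming kubectl is on PATH and pointed at the right context; the waitPodReady helper and the 6-minute timeout are illustrative, not minikube's in-process client-go implementation:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitPodReady polls kubectl until the named pod reports Ready=True or the
    // timeout expires. Hypothetical helper; minikube's pod_ready.go does this
    // in-process rather than by shelling out.
    func waitPodReady(namespace, pod string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "get", "pod", pod, "-n", namespace,
                "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
            if err == nil && strings.TrimSpace(string(out)) == "True" {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", namespace, pod, timeout)
    }

    func main() {
        if err := waitPodReady("kube-system", "coredns-5644d7b6d9-5s84s", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }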
	I1128 00:50:12.980511   45269 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:50:12.980554   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:50:12.996604   45269 api_server.go:72] duration metric: took 8.203675443s to wait for apiserver process to appear ...
	I1128 00:50:12.996645   45269 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:50:12.996670   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:50:13.006987   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I1128 00:50:13.007986   45269 api_server.go:141] control plane version: v1.16.0
	I1128 00:50:13.008003   45269 api_server.go:131] duration metric: took 11.352257ms to wait for apiserver health ...
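The api_server lines show minikube probing https://192.168.39.172:8443/healthz and treating a 200 with the plain-text body "ok" as healthy. A rough equivalent of that probe; minikube actually presents the cluster CA and client certificates, so the InsecureSkipVerify below is only to keep the sketch short and should not be copied into real tooling:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Skip certificate verification only for this sketch; load the cluster CA
        // and client certs instead in anything real.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.172:8443/healthz")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
    }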
	I1128 00:50:13.008010   45269 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:50:13.013658   45269 system_pods.go:59] 5 kube-system pods found
	I1128 00:50:13.013677   45269 system_pods.go:61] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.013682   45269 system_pods.go:61] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.013686   45269 system_pods.go:61] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.013693   45269 system_pods.go:61] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.013697   45269 system_pods.go:61] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.013703   45269 system_pods.go:74] duration metric: took 5.688575ms to wait for pod list to return data ...
	I1128 00:50:13.013710   45269 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:50:13.016210   45269 default_sa.go:45] found service account: "default"
	I1128 00:50:13.016228   45269 default_sa.go:55] duration metric: took 2.513069ms for default service account to be created ...
	I1128 00:50:13.016234   45269 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:50:13.020464   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:13.020488   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.020496   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.020502   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.020513   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.020522   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.020544   45269 retry.go:31] will retry after 244.092512ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:13.270858   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:13.270893   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.270901   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.270907   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.270918   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.270926   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.270946   45269 retry.go:31] will retry after 311.602199ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:13.588013   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:13.588041   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.588047   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.588051   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.588057   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.588062   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.588076   45269 retry.go:31] will retry after 298.08088ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:13.891272   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:13.891302   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.891307   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.891311   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.891318   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.891323   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.891339   45269 retry.go:31] will retry after 474.390305ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:14.371201   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:14.371230   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:14.371236   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:14.371241   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:14.371248   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:14.371253   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:14.371269   45269 retry.go:31] will retry after 719.510586ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:15.096817   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:15.096846   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:15.096851   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:15.096855   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:15.096862   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:15.096866   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:15.096881   45269 retry.go:31] will retry after 684.457384ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:15.786918   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:15.786947   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:15.786952   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:15.786956   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:15.786962   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:15.786967   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:15.786982   45269 retry.go:31] will retry after 721.543291ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:16.513230   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:16.513258   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:16.513263   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:16.513268   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:16.513275   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:16.513280   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:16.513296   45269 retry.go:31] will retry after 1.405502561s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:17.926572   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:17.926610   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:17.926619   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:17.926626   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:17.926636   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:17.926642   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:17.926662   45269 retry.go:31] will retry after 1.65088536s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:19.584099   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:19.584130   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:19.584136   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:19.584140   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:19.584147   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:19.584152   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:19.584168   45269 retry.go:31] will retry after 1.660488369s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:21.250659   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:21.250706   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:21.250714   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:21.250719   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:21.250729   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:21.250736   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:21.250757   45269 retry.go:31] will retry after 1.762203818s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:23.018771   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:23.018798   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:23.018804   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:23.018808   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:23.018815   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:23.018819   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:23.018837   45269 retry.go:31] will retry after 2.558255345s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:25.584363   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:25.584394   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:25.584402   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:25.584409   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:25.584417   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:25.584422   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:25.584446   45269 retry.go:31] will retry after 4.457632402s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:30.049343   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:30.049374   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:30.049381   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:30.049388   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:30.049398   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:30.049406   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:30.049426   45269 retry.go:31] will retry after 5.077489821s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:35.133974   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:35.134001   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:35.134006   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:35.134010   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:35.134022   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:35.134029   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:35.134048   45269 retry.go:31] will retry after 5.675627515s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:40.814779   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:40.814808   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:40.814814   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:40.814818   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:40.814825   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:40.814829   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:40.814846   45269 retry.go:31] will retry after 5.701774609s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:46.524426   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:46.524467   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:46.524475   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:46.524482   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:46.524492   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:46.524499   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:46.524521   45269 retry.go:31] will retry after 7.322045517s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:53.852348   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:53.852378   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:53.852387   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:53.852394   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:53.852406   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:53.852413   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:53.852442   45269 retry.go:31] will retry after 12.532542473s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:51:06.392828   45269 system_pods.go:86] 9 kube-system pods found
	I1128 00:51:06.392858   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:51:06.392863   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:51:06.392872   45269 system_pods.go:89] "etcd-old-k8s-version-732472" [b839e564-30b4-4ddf-a7af-15a11ae6caaf] Pending
	I1128 00:51:06.392876   45269 system_pods.go:89] "kube-apiserver-old-k8s-version-732472" [7f8f59a8-21fb-4161-ba13-c123b21f74cb] Pending
	I1128 00:51:06.392882   45269 system_pods.go:89] "kube-controller-manager-old-k8s-version-732472" [0271d0e4-295a-47fc-a42f-77a8f9d71930] Pending
	I1128 00:51:06.392886   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:51:06.392889   45269 system_pods.go:89] "kube-scheduler-old-k8s-version-732472" [a22ecb05-e88d-4fc4-8e16-df419a9564e3] Pending
	I1128 00:51:06.392897   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:51:06.392901   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:51:06.392915   45269 retry.go:31] will retry after 10.519018157s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:51:16.918264   45269 system_pods.go:86] 9 kube-system pods found
	I1128 00:51:16.918303   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:51:16.918311   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:51:16.918319   45269 system_pods.go:89] "etcd-old-k8s-version-732472" [b839e564-30b4-4ddf-a7af-15a11ae6caaf] Running
	I1128 00:51:16.918326   45269 system_pods.go:89] "kube-apiserver-old-k8s-version-732472" [7f8f59a8-21fb-4161-ba13-c123b21f74cb] Running
	I1128 00:51:16.918333   45269 system_pods.go:89] "kube-controller-manager-old-k8s-version-732472" [0271d0e4-295a-47fc-a42f-77a8f9d71930] Running
	I1128 00:51:16.918340   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:51:16.918346   45269 system_pods.go:89] "kube-scheduler-old-k8s-version-732472" [a22ecb05-e88d-4fc4-8e16-df419a9564e3] Running
	I1128 00:51:16.918360   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:51:16.918375   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:51:16.918386   45269 system_pods.go:126] duration metric: took 1m3.902146285s to wait for k8s-apps to be running ...
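The retry.go lines above are minikube's poll loop: list the kube-system pods, check that etcd, kube-apiserver, kube-controller-manager and kube-scheduler are present and Running, and retry with a growing delay until they are (here the static pods take just over a minute to appear). A compact sketch of that loop, assuming kubectl and a crude doubling backoff in place of minikube's jittered retry package:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // requiredComponents are the control-plane pods the log waits for.
    var requiredComponents = []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"}

    // missingComponents returns the required components that are not yet Running.
    func missingComponents() []string {
        // One "<name> <phase>" line per kube-system pod.
        out, err := exec.Command("kubectl", "get", "pods", "-n", "kube-system",
            "-o", `jsonpath={range .items[*]}{.metadata.name} {.status.phase}{"\n"}{end}`).Output()
        if err != nil {
            return requiredComponents
        }
        var missing []string
        for _, c := range requiredComponents {
            found := false
            for _, line := range strings.Split(string(out), "\n") {
                if strings.HasPrefix(line, c+"-") && strings.HasSuffix(line, " Running") {
                    found = true
                    break
                }
            }
            if !found {
                missing = append(missing, c)
            }
        }
        return missing
    }

    func main() {
        delay := 250 * time.Millisecond
        for {
            missing := missingComponents()
            if len(missing) == 0 {
                fmt.Println("all control-plane components Running")
                return
            }
            fmt.Printf("will retry after %s: missing components: %s\n", delay, strings.Join(missing, ", "))
            time.Sleep(delay)
            if delay < 10*time.Second {
                delay *= 2
            }
        }
    }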
	I1128 00:51:16.918398   45269 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:51:16.918445   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:51:16.937522   45269 system_svc.go:56] duration metric: took 19.116204ms WaitForService to wait for kubelet.
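The system_svc check is just `sudo systemctl is-active --quiet service kubelet` run on the guest over SSH and timed. A local stand-in, assuming direct access to systemd rather than minikube's ssh_runner (the extra `service` token in the logged command is minikube's phrasing; plain `systemctl is-active kubelet` is the usual form):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        // --quiet suppresses output; the exit code alone says whether kubelet is active.
        err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
        if err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Printf("kubelet active (checked in %s)\n", time.Since(start))
    }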
	I1128 00:51:16.937556   45269 kubeadm.go:581] duration metric: took 1m12.144633009s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:51:16.937577   45269 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:51:16.941812   45269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:51:16.941838   45269 node_conditions.go:123] node cpu capacity is 2
	I1128 00:51:16.941849   45269 node_conditions.go:105] duration metric: took 4.264769ms to run NodePressure ...
	I1128 00:51:16.941859   45269 start.go:228] waiting for startup goroutines ...
	I1128 00:51:16.941865   45269 start.go:233] waiting for cluster config update ...
	I1128 00:51:16.941874   45269 start.go:242] writing updated cluster config ...
	I1128 00:51:16.942150   45269 ssh_runner.go:195] Run: rm -f paused
	I1128 00:51:16.992567   45269 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1128 00:51:16.994677   45269 out.go:177] 
	W1128 00:51:16.996083   45269 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1128 00:51:16.997442   45269 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1128 00:51:16.998644   45269 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-732472" cluster and "default" namespace by default
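The closing warning compares the host kubectl (1.28.4) against the cluster version (1.16.0) and reports a minor-version skew of 12, far outside the supported +/-1 window, which is why the log suggests `minikube kubectl` instead. A sketch of that arithmetic, assuming the two versions are already known as strings:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minor pulls the minor number out of a "major.minor.patch" version string.
    func minor(v string) int {
        parts := strings.Split(v, ".")
        if len(parts) < 2 {
            return 0
        }
        n, _ := strconv.Atoi(parts[1])
        return n
    }

    func main() {
        kubectlVersion, clusterVersion := "1.28.4", "1.16.0"
        skew := minor(kubectlVersion) - minor(clusterVersion)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("minor skew: %d\n", skew) // 12 for the versions in this run
        if skew > 1 {
            fmt.Printf("! kubectl %s may have incompatibilities with Kubernetes %s\n",
                kubectlVersion, clusterVersion)
        }
    }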
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 00:43:06 UTC, ends at Tue 2023-11-28 00:57:39 UTC. --
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.807505941Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:6511a68179cfc712850559fdc55e8bd8bb67a9852597321494d0339ebbb4099f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1701132515982653627,StartedAt:1701132516083891237,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.28.4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w5ct2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ac66db-fe8d-419d-9237-b0dd4077559a,},Annotations:map[string]string{io.kubernetes.container.hash: 52abbc6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b3ac66db-fe8d-419d-9237-b0dd4077559a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b3ac66db-fe8d-419d-9237-b0dd4077559a/containers/kube-proxy/47996ac1,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/kubelet/pods/b3ac66db-fe8d-419d-9237-b0dd4077559a/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io
/serviceaccount,HostPath:/var/lib/kubelet/pods/b3ac66db-fe8d-419d-9237-b0dd4077559a/volumes/kubernetes.io~projected/kube-api-access-hjqwl,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-proxy-w5ct2_b3ac66db-fe8d-419d-9237-b0dd4077559a/kube-proxy/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=68897853-d041-49a9-b679-f71e5be7d4bf name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.808232531Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:e59d10fb9061bc959669d14d9bd0b2c9a179dc9522e5de19db2952745217739f,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=4d78fdcf-f1f8-473b-beb9-c849f9fe2f3e name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.808391928Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:e59d10fb9061bc959669d14d9bd0b2c9a179dc9522e5de19db2952745217739f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1701132515405139646,StartedAt:1701132515477758062,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.10.1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kjg5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf956dfb-3a7f-4605-a849-ee887562fce5,},Annotations:map[string]string{io.kubernetes.container.hash: 8a70d9e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/bf956dfb-3a7f-4605-a849-ee887562fce5/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/bf956dfb-3a7f-4605-a849-ee887562fce5/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/bf956dfb-3a7f-4605-a849-ee887562fce5/containers/coredns/e7504cf6,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/
lib/kubelet/pods/bf956dfb-3a7f-4605-a849-ee887562fce5/volumes/kubernetes.io~projected/kube-api-access-4rjhl,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_coredns-5dd5756b68-kjg5f_bf956dfb-3a7f-4605-a849-ee887562fce5/coredns/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=4d78fdcf-f1f8-473b-beb9-c849f9fe2f3e name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.809224135Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:bcbd9b61aa21b42886e337807c7bdda8c90f1e19b4dde0b4a89273c7ff8f95cd,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=535b0a26-5e4a-46de-97e7-1abd779bad71 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.809358991Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:bcbd9b61aa21b42886e337807c7bdda8c90f1e19b4dde0b4a89273c7ff8f95cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1701132491733238519,StartedAt:1701132492778849153,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.28.4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d739abfe9178a563e914606688626e19,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d739abfe9178a563e914606688626e19/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d739abfe9178a563e914606688626e19/containers/kube-scheduler/d69305b5,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-scheduler-embed-certs-304541_d739abfe9178a563e914606688626e19/kube-scheduler/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=535b0a26-5e4a-46de-97e7-1abd779bad71 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.810103433Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:9eef8dc0f07ce7945876e782eef7f5863d8bfe65abe904b3ad26a3dab24cd57c,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=7dbc7590-1ffe-4247-84c2-0e42dec2ed87 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.810214171Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:9eef8dc0f07ce7945876e782eef7f5863d8bfe65abe904b3ad26a3dab24cd57c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1701132491506199509,StartedAt:1701132493120544999,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.9-0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 436d4e334a24347cc5d0fc652c17ba7b,},Annotations:map[string]string{io.kubernetes.container.hash: 1587da43,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/436d4e334a24347cc5d0fc652c17ba7b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/436d4e334a24347cc5d0fc652c17ba7b/containers/etcd/dbc0560a,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_etcd-embed-certs-304541_436d4e334a24347cc5d0fc652c17ba7b/etcd/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=7dbc7590-1ffe-4247-84c2-0e42dec2ed87 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.810712527Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:83b4ead516cfcc84bde5af39e3631dd4594ab102e6ab6e1aeba10747d1c88d0f,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=df644f16-14ec-4647-a964-1ffbe5ba05b4 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.810815180Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:83b4ead516cfcc84bde5af39e3631dd4594ab102e6ab6e1aeba10747d1c88d0f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1701132491203480650,StartedAt:1701132491970968431,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.28.4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb67ef96df179669e13da188205336d,},Annotations:map[string]string{io.kubernetes.container.hash: 1a41d4de,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/adb67ef96df179669e13da188205336d/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/adb67ef96df179669e13da188205336d/containers/kube-apiserver/15ef5b7e,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-apiserver-embed-certs-304541_adb67ef96
df179669e13da188205336d/kube-apiserver/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=df644f16-14ec-4647-a964-1ffbe5ba05b4 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.811388518Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:c6c2dc2b090d3ebe0198e4f0617f64f77e33c67c72770d33d0a98646fa8840ed,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=bd360292-4384-4141-84ee-0f010df429ea name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.811493038Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:c6c2dc2b090d3ebe0198e4f0617f64f77e33c67c72770d33d0a98646fa8840ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1701132491159304374,StartedAt:1701132492249406604,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.28.4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338329e9c8fedff2d5801572cdf8d155,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/338329e9c8fedff2d5801572cdf8d155/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/338329e9c8fedff2d5801572cdf8d155/containers/kube-controller-manager/e0c9bf87,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRI
VATE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-controller-manager-embed-certs-304541_338329e9c8fedff2d5801572cdf8d155/kube-controller-manager/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=bd360292-4384-4141-84ee-0f010df429ea name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.818537503Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=126ba254-22e2-4580-b048-040498047493 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.818602592Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=126ba254-22e2-4580-b048-040498047493 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.819444362Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=eb6f6ef4-195a-4715-a0de-7ccb5ef42876 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.819807864Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133059819796547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=eb6f6ef4-195a-4715-a0de-7ccb5ef42876 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.820336206Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e9981094-f861-4ac3-aaa3-61727baed0a0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.820401609Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e9981094-f861-4ac3-aaa3-61727baed0a0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.820553574Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3fc8bf06b33b6bd3855dc00fec4b678dda94fa211e2dd4538bc17ab34dbf4a1,PodSandboxId:a33478695b934c5d6364b0e621311747cb2966464b2dadb0b11b02937af5e152,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132516459194822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c62a8419-b0e5-4330-a49b-986693e183b2,},Annotations:map[string]string{io.kubernetes.container.hash: 19218868,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6511a68179cfc712850559fdc55e8bd8bb67a9852597321494d0339ebbb4099f,PodSandboxId:fc135d72f59edc39c2517d19f324e2783df33a5a7c25f81324c36a2c774e041f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701132515822591632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w5ct2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ac66db-fe8d-419d-9237-b0dd4077559a,},Annotations:map[string]string{io.kubernetes.container.hash: 52abbc6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59d10fb9061bc959669d14d9bd0b2c9a179dc9522e5de19db2952745217739f,PodSandboxId:d664a9b1c7f81feeb7bfc090d473b4be136da266d388d5f76051731b5cc92b34,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701132515168481483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kjg5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf956dfb-3a7f-4605-a849-ee887562fce5,},Annotations:map[string]string{io.kubernetes.container.hash: 8a70d9e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcbd9b61aa21b42886e337807c7bdda8c90f1e19b4dde0b4a89273c7ff8f95cd,PodSandboxId:45bd0bcfb925db9d80582ec505cc9ed0a1c586eae6c418ddd0fe4c29356def77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701132491472482658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: d739abfe9178a563e914606688626e19,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eef8dc0f07ce7945876e782eef7f5863d8bfe65abe904b3ad26a3dab24cd57c,PodSandboxId:404ce39b26f1718910cc1467ee65993f8ab47320b28162f232ffa82042f1535a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701132491329027379,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 436d4e334a24347cc5d0fc652c17ba7b,},Annotations:
map[string]string{io.kubernetes.container.hash: 1587da43,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83b4ead516cfcc84bde5af39e3631dd4594ab102e6ab6e1aeba10747d1c88d0f,PodSandboxId:0d95a526bcbfdd92225e6f2efcfc0060f71a8d296153ea7f8958a733963c0d2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701132491049881693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb67ef96df179669e13da188205336d,},Annotations:map[string
]string{io.kubernetes.container.hash: 1a41d4de,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6c2dc2b090d3ebe0198e4f0617f64f77e33c67c72770d33d0a98646fa8840ed,PodSandboxId:7d666e1d1d3873fdc338af2cddf27dfd4296c2287639cc41a3009a39a18c8243,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701132490996328475,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338329e9c8fedff2d5801572cdf8d15
5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e9981094-f861-4ac3-aaa3-61727baed0a0 name=/runtime.v1.RuntimeService/ListContainers
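The CRI-O journal excerpt is the runtime answering ContainerStatus, ListContainers, Version and ImageFsInfo gRPC calls from the kubelet and from the `minikube logs` collection. The same information can be pulled by hand with crictl, which talks to the CRI socket directly; a sketch, assuming crictl is installed on the node (run inside `minikube ssh`) and the default /var/run/crio/crio.sock endpoint, with the container ID prefix taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run shells out to crictl against the CRI-O socket and prints the result.
    // Hypothetical helper for poking at the runtime the way the gRPC requests
    // in the journal do.
    func run(args ...string) {
        cmd := append([]string{"crictl", "--runtime-endpoint", "unix:///var/run/crio/crio.sock"}, args...)
        out, err := exec.Command("sudo", cmd...).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("crictl failed:", err)
        }
    }

    func main() {
        run("ps", "-a")                // roughly ListContainers
        run("inspect", "6511a68179cf") // roughly ContainerStatus for the kube-proxy container
    }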
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.855858261Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=612de4fc-85cb-4058-925f-5221c3427256 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.855947065Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=612de4fc-85cb-4058-925f-5221c3427256 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.856963226Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=461304de-8e5c-4082-a15d-47917de85726 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.857436349Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133059857422771,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=461304de-8e5c-4082-a15d-47917de85726 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.857906549Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3638df98-43dd-4a17-8892-bd6cef7dc193 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.857981874Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3638df98-43dd-4a17-8892-bd6cef7dc193 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:57:39 embed-certs-304541 crio[718]: time="2023-11-28 00:57:39.858244322Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3fc8bf06b33b6bd3855dc00fec4b678dda94fa211e2dd4538bc17ab34dbf4a1,PodSandboxId:a33478695b934c5d6364b0e621311747cb2966464b2dadb0b11b02937af5e152,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132516459194822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c62a8419-b0e5-4330-a49b-986693e183b2,},Annotations:map[string]string{io.kubernetes.container.hash: 19218868,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6511a68179cfc712850559fdc55e8bd8bb67a9852597321494d0339ebbb4099f,PodSandboxId:fc135d72f59edc39c2517d19f324e2783df33a5a7c25f81324c36a2c774e041f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701132515822591632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w5ct2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ac66db-fe8d-419d-9237-b0dd4077559a,},Annotations:map[string]string{io.kubernetes.container.hash: 52abbc6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59d10fb9061bc959669d14d9bd0b2c9a179dc9522e5de19db2952745217739f,PodSandboxId:d664a9b1c7f81feeb7bfc090d473b4be136da266d388d5f76051731b5cc92b34,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701132515168481483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kjg5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf956dfb-3a7f-4605-a849-ee887562fce5,},Annotations:map[string]string{io.kubernetes.container.hash: 8a70d9e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcbd9b61aa21b42886e337807c7bdda8c90f1e19b4dde0b4a89273c7ff8f95cd,PodSandboxId:45bd0bcfb925db9d80582ec505cc9ed0a1c586eae6c418ddd0fe4c29356def77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701132491472482658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: d739abfe9178a563e914606688626e19,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eef8dc0f07ce7945876e782eef7f5863d8bfe65abe904b3ad26a3dab24cd57c,PodSandboxId:404ce39b26f1718910cc1467ee65993f8ab47320b28162f232ffa82042f1535a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701132491329027379,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 436d4e334a24347cc5d0fc652c17ba7b,},Annotations:
map[string]string{io.kubernetes.container.hash: 1587da43,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83b4ead516cfcc84bde5af39e3631dd4594ab102e6ab6e1aeba10747d1c88d0f,PodSandboxId:0d95a526bcbfdd92225e6f2efcfc0060f71a8d296153ea7f8958a733963c0d2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701132491049881693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb67ef96df179669e13da188205336d,},Annotations:map[string
]string{io.kubernetes.container.hash: 1a41d4de,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6c2dc2b090d3ebe0198e4f0617f64f77e33c67c72770d33d0a98646fa8840ed,PodSandboxId:7d666e1d1d3873fdc338af2cddf27dfd4296c2287639cc41a3009a39a18c8243,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701132490996328475,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338329e9c8fedff2d5801572cdf8d15
5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3638df98-43dd-4a17-8892-bd6cef7dc193 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e3fc8bf06b33b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   a33478695b934       storage-provisioner
	6511a68179cfc       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   fc135d72f59ed       kube-proxy-w5ct2
	e59d10fb9061b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   d664a9b1c7f81       coredns-5dd5756b68-kjg5f
	bcbd9b61aa21b       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   45bd0bcfb925d       kube-scheduler-embed-certs-304541
	9eef8dc0f07ce       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   404ce39b26f17       etcd-embed-certs-304541
	83b4ead516cfc       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   0d95a526bcbfd       kube-apiserver-embed-certs-304541
	c6c2dc2b090d3       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   7d666e1d1d387       kube-controller-manager-embed-certs-304541
	
	* 
	* ==> coredns [e59d10fb9061bc959669d14d9bd0b2c9a179dc9522e5de19db2952745217739f] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:33774 - 6383 "HINFO IN 7828742696619454455.1271914779107748957. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022305153s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-304541
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-304541
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=embed-certs-304541
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T00_48_19_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 00:48:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-304541
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 00:57:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 00:53:46 +0000   Tue, 28 Nov 2023 00:48:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 00:53:46 +0000   Tue, 28 Nov 2023 00:48:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 00:53:46 +0000   Tue, 28 Nov 2023 00:48:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 00:53:46 +0000   Tue, 28 Nov 2023 00:48:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.93
	  Hostname:    embed-certs-304541
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 da1ff02b83bc434190c2ec845a961bf6
	  System UUID:                da1ff02b-83bc-4341-90c2-ec845a961bf6
	  Boot ID:                    07a8ef9b-7aeb-4f02-abdc-d4b060d69676
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-kjg5f                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 etcd-embed-certs-304541                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-embed-certs-304541             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-embed-certs-304541    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-w5ct2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-embed-certs-304541             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 metrics-server-57f55c9bc5-xzz2t               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m31s (x8 over 9m31s)  kubelet          Node embed-certs-304541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m31s (x8 over 9m31s)  kubelet          Node embed-certs-304541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m31s (x7 over 9m31s)  kubelet          Node embed-certs-304541 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node embed-certs-304541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node embed-certs-304541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node embed-certs-304541 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s                  kubelet          Node embed-certs-304541 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m21s                  kubelet          Node embed-certs-304541 status is now: NodeReady
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m10s                  node-controller  Node embed-certs-304541 event: Registered Node embed-certs-304541 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov28 00:42] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068406] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Nov28 00:43] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.404011] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147180] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000008] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.399396] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.605731] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.124990] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.153846] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.109384] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.216137] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[ +16.978013] systemd-fstab-generator[918]: Ignoring "noauto" for root device
	[ +20.230386] kauditd_printk_skb: 29 callbacks suppressed
	[Nov28 00:48] systemd-fstab-generator[3509]: Ignoring "noauto" for root device
	[  +9.781826] systemd-fstab-generator[3829]: Ignoring "noauto" for root device
	[ +13.478507] kauditd_printk_skb: 2 callbacks suppressed
	[  +9.110837] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [9eef8dc0f07ce7945876e782eef7f5863d8bfe65abe904b3ad26a3dab24cd57c] <==
	* {"level":"info","ts":"2023-11-28T00:48:13.387559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6054ef008d6c33e2 switched to configuration voters=(6941435711336494050)"}
	{"level":"info","ts":"2023-11-28T00:48:13.38767Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6aa3d10fec64a3ba","local-member-id":"6054ef008d6c33e2","added-peer-id":"6054ef008d6c33e2","added-peer-peer-urls":["https://192.168.50.93:2380"]}
	{"level":"info","ts":"2023-11-28T00:48:13.389514Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-28T00:48:13.389626Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.93:2380"}
	{"level":"info","ts":"2023-11-28T00:48:13.389785Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.93:2380"}
	{"level":"info","ts":"2023-11-28T00:48:13.392687Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"6054ef008d6c33e2","initial-advertise-peer-urls":["https://192.168.50.93:2380"],"listen-peer-urls":["https://192.168.50.93:2380"],"advertise-client-urls":["https://192.168.50.93:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.93:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-28T00:48:13.392739Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-28T00:48:13.945903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6054ef008d6c33e2 is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-28T00:48:13.945972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6054ef008d6c33e2 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-28T00:48:13.94599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6054ef008d6c33e2 received MsgPreVoteResp from 6054ef008d6c33e2 at term 1"}
	{"level":"info","ts":"2023-11-28T00:48:13.946002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6054ef008d6c33e2 became candidate at term 2"}
	{"level":"info","ts":"2023-11-28T00:48:13.946007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6054ef008d6c33e2 received MsgVoteResp from 6054ef008d6c33e2 at term 2"}
	{"level":"info","ts":"2023-11-28T00:48:13.946015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6054ef008d6c33e2 became leader at term 2"}
	{"level":"info","ts":"2023-11-28T00:48:13.946093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6054ef008d6c33e2 elected leader 6054ef008d6c33e2 at term 2"}
	{"level":"info","ts":"2023-11-28T00:48:13.947534Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"6054ef008d6c33e2","local-member-attributes":"{Name:embed-certs-304541 ClientURLs:[https://192.168.50.93:2379]}","request-path":"/0/members/6054ef008d6c33e2/attributes","cluster-id":"6aa3d10fec64a3ba","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-28T00:48:13.947587Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T00:48:13.948172Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T00:48:13.948839Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-28T00:48:13.949211Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T00:48:13.949386Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.93:2379"}
	{"level":"info","ts":"2023-11-28T00:48:13.950267Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6aa3d10fec64a3ba","local-member-id":"6054ef008d6c33e2","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T00:48:13.950383Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T00:48:13.950403Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T00:48:13.950422Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-28T00:48:13.950429Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:57:40 up 14 min,  0 users,  load average: 0.05, 0.19, 0.18
	Linux embed-certs-304541 5.10.57 #1 SMP Mon Nov 27 21:58:27 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [83b4ead516cfcc84bde5af39e3631dd4594ab102e6ab6e1aeba10747d1c88d0f] <==
	* W1128 00:53:16.791177       1 handler_proxy.go:93] no RequestInfo found in the context
	W1128 00:53:16.791324       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:53:16.791503       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 00:53:16.791513       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1128 00:53:16.791351       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 00:53:16.792558       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 00:54:15.714566       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1128 00:54:16.791761       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:54:16.791921       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 00:54:16.791955       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 00:54:16.793135       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:54:16.793211       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 00:54:16.793248       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 00:55:15.715165       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1128 00:56:15.714725       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1128 00:56:16.792339       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:56:16.792492       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 00:56:16.792504       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 00:56:16.793769       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:56:16.793833       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 00:56:16.793845       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 00:57:15.714822       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [c6c2dc2b090d3ebe0198e4f0617f64f77e33c67c72770d33d0a98646fa8840ed] <==
	* I1128 00:52:04.199443       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="100.517µs"
	E1128 00:52:30.954818       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:52:31.387592       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:53:00.961329       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:53:01.397552       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:53:30.968942       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:53:31.410137       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:54:00.975424       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:54:01.418211       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:54:30.981483       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:54:31.428650       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1128 00:54:35.203498       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="518.929µs"
	I1128 00:54:50.199738       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="97.077µs"
	E1128 00:55:00.986769       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:55:01.438266       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:55:30.993346       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:55:31.447746       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:56:00.999383       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:56:01.458635       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:56:31.005581       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:56:31.468938       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:57:01.011649       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:57:01.477430       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:57:31.018778       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:57:31.486300       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [6511a68179cfc712850559fdc55e8bd8bb67a9852597321494d0339ebbb4099f] <==
	* I1128 00:48:36.608546       1 server_others.go:69] "Using iptables proxy"
	I1128 00:48:36.637875       1 node.go:141] Successfully retrieved node IP: 192.168.50.93
	I1128 00:48:36.742316       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1128 00:48:36.742544       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1128 00:48:36.750635       1 server_others.go:152] "Using iptables Proxier"
	I1128 00:48:36.751138       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1128 00:48:36.752334       1 server.go:846] "Version info" version="v1.28.4"
	I1128 00:48:36.752378       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 00:48:36.758108       1 config.go:188] "Starting service config controller"
	I1128 00:48:36.758125       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1128 00:48:36.758246       1 config.go:97] "Starting endpoint slice config controller"
	I1128 00:48:36.758252       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1128 00:48:36.758855       1 config.go:315] "Starting node config controller"
	I1128 00:48:36.758866       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1128 00:48:36.859358       1 shared_informer.go:318] Caches are synced for service config
	I1128 00:48:36.859608       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1128 00:48:36.860293       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [bcbd9b61aa21b42886e337807c7bdda8c90f1e19b4dde0b4a89273c7ff8f95cd] <==
	* W1128 00:48:15.871468       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1128 00:48:15.871510       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1128 00:48:15.871699       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1128 00:48:15.871732       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 00:48:16.693324       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1128 00:48:16.693438       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1128 00:48:16.766236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1128 00:48:16.766293       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1128 00:48:16.823319       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1128 00:48:16.823387       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1128 00:48:16.835702       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1128 00:48:16.835832       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1128 00:48:16.854618       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1128 00:48:16.854743       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 00:48:16.875626       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 00:48:16.875772       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1128 00:48:16.968221       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1128 00:48:16.968352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1128 00:48:17.014483       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 00:48:17.014536       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1128 00:48:17.052162       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 00:48:17.052249       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1128 00:48:17.112364       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 00:48:17.112674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1128 00:48:19.247226       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 00:43:06 UTC, ends at Tue 2023-11-28 00:57:40 UTC. --
	Nov 28 00:54:50 embed-certs-304541 kubelet[3836]: E1128 00:54:50.182625    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 00:55:04 embed-certs-304541 kubelet[3836]: E1128 00:55:04.182163    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 00:55:19 embed-certs-304541 kubelet[3836]: E1128 00:55:19.183378    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 00:55:19 embed-certs-304541 kubelet[3836]: E1128 00:55:19.266317    3836 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 00:55:19 embed-certs-304541 kubelet[3836]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 00:55:19 embed-certs-304541 kubelet[3836]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 00:55:19 embed-certs-304541 kubelet[3836]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 00:55:33 embed-certs-304541 kubelet[3836]: E1128 00:55:33.182335    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 00:55:47 embed-certs-304541 kubelet[3836]: E1128 00:55:47.182857    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 00:55:59 embed-certs-304541 kubelet[3836]: E1128 00:55:59.182679    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 00:56:12 embed-certs-304541 kubelet[3836]: E1128 00:56:12.181433    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 00:56:19 embed-certs-304541 kubelet[3836]: E1128 00:56:19.263313    3836 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 00:56:19 embed-certs-304541 kubelet[3836]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 00:56:19 embed-certs-304541 kubelet[3836]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 00:56:19 embed-certs-304541 kubelet[3836]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 00:56:25 embed-certs-304541 kubelet[3836]: E1128 00:56:25.182540    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 00:56:36 embed-certs-304541 kubelet[3836]: E1128 00:56:36.181945    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 00:56:51 embed-certs-304541 kubelet[3836]: E1128 00:56:51.182027    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 00:57:06 embed-certs-304541 kubelet[3836]: E1128 00:57:06.182885    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 00:57:19 embed-certs-304541 kubelet[3836]: E1128 00:57:19.262478    3836 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 00:57:19 embed-certs-304541 kubelet[3836]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 00:57:19 embed-certs-304541 kubelet[3836]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 00:57:19 embed-certs-304541 kubelet[3836]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 00:57:21 embed-certs-304541 kubelet[3836]: E1128 00:57:21.182926    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 00:57:32 embed-certs-304541 kubelet[3836]: E1128 00:57:32.182009    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	
	* 
	* ==> storage-provisioner [e3fc8bf06b33b6bd3855dc00fec4b678dda94fa211e2dd4538bc17ab34dbf4a1] <==
	* I1128 00:48:36.693977       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 00:48:36.708872       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 00:48:36.708940       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1128 00:48:36.723749       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1128 00:48:36.723968       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-304541_322210ba-84b6-48ab-aefc-b0ff548de6df!
	I1128 00:48:36.725214       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"469eb09c-dd9e-49b7-864d-91bb452a3562", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-304541_322210ba-84b6-48ab-aefc-b0ff548de6df became leader
	I1128 00:48:36.824644       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-304541_322210ba-84b6-48ab-aefc-b0ff548de6df!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-304541 -n embed-certs-304541
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-304541 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-xzz2t
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-304541 describe pod metrics-server-57f55c9bc5-xzz2t
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-304541 describe pod metrics-server-57f55c9bc5-xzz2t: exit status 1 (69.818129ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-xzz2t" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-304541 describe pod metrics-server-57f55c9bc5-xzz2t: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-488423 -n default-k8s-diff-port-488423
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-11-28 00:57:51.582880947 +0000 UTC m=+5573.147907572
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-488423 -n default-k8s-diff-port-488423
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-488423 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-488423 logs -n 25: (1.60251687s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-188325                                 | cert-options-188325          | jenkins | v1.32.0 | 28 Nov 23 00:33 UTC | 28 Nov 23 00:33 UTC |
	| start   | -p no-preload-473615                                   | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:33 UTC | 28 Nov 23 00:36 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| start   | -p cert-expiration-747416                              | cert-expiration-747416       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:35 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-747416                              | cert-expiration-747416       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:35 UTC |
	| start   | -p embed-certs-304541                                  | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-732472        | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-732472                              | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-789586                              | stopped-upgrade-789586       | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-304541            | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-304541                                  | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-789586                              | stopped-upgrade-789586       | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-001086 | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	|         | disable-driver-mounts-001086                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:37 UTC |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-473615             | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:37 UTC | 28 Nov 23 00:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-473615                                   | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-732472             | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-488423  | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC | 28 Nov 23 00:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-732472                              | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC | 28 Nov 23 00:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-304541                 | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-304541                                  | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC | 28 Nov 23 00:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-473615                  | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-473615                                   | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC | 28 Nov 23 00:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-488423       | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:40 UTC | 28 Nov 23 00:48 UTC |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 00:40:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 00:40:42.238362   46126 out.go:296] Setting OutFile to fd 1 ...
	I1128 00:40:42.238498   46126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:40:42.238513   46126 out.go:309] Setting ErrFile to fd 2...
	I1128 00:40:42.238520   46126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:40:42.238712   46126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1128 00:40:42.239236   46126 out.go:303] Setting JSON to false
	I1128 00:40:42.240138   46126 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4989,"bootTime":1701127053,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 00:40:42.240194   46126 start.go:138] virtualization: kvm guest
	I1128 00:40:42.242505   46126 out.go:177] * [default-k8s-diff-port-488423] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 00:40:42.243937   46126 out.go:177]   - MINIKUBE_LOCATION=17206
	I1128 00:40:42.243990   46126 notify.go:220] Checking for updates...
	I1128 00:40:42.245317   46126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 00:40:42.246717   46126 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:40:42.248096   46126 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1128 00:40:42.249294   46126 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 00:40:42.250596   46126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 00:40:42.252296   46126 config.go:182] Loaded profile config "default-k8s-diff-port-488423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:40:42.252793   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:40:42.252854   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:40:42.267605   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45895
	I1128 00:40:42.267958   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:40:42.268457   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:40:42.268479   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:40:42.268774   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:40:42.268971   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:40:42.269215   46126 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 00:40:42.269470   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:40:42.269501   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:40:42.283984   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34957
	I1128 00:40:42.284338   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:40:42.284786   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:40:42.284808   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:40:42.285077   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:40:42.285263   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:40:42.319077   46126 out.go:177] * Using the kvm2 driver based on existing profile
	I1128 00:40:42.320321   46126 start.go:298] selected driver: kvm2
	I1128 00:40:42.320332   46126 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-488423 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-488423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.242 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:40:42.320481   46126 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 00:40:42.321242   46126 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:40:42.321325   46126 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17206-4749/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 00:40:42.335477   46126 install.go:137] /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I1128 00:40:42.335818   46126 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 00:40:42.335887   46126 cni.go:84] Creating CNI manager for ""
	I1128 00:40:42.335907   46126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:40:42.335922   46126 start_flags.go:323] config:
	{Name:default-k8s-diff-port-488423 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-48842
3 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.242 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:40:42.336092   46126 iso.go:125] acquiring lock: {Name:mkcbf4fbddcb89ef7fa17df683cb708781ecb7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:40:42.337823   46126 out.go:177] * Starting control plane node default-k8s-diff-port-488423 in cluster default-k8s-diff-port-488423
	I1128 00:40:40.713025   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:40:42.338980   46126 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:40:42.339010   46126 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1128 00:40:42.339024   46126 cache.go:56] Caching tarball of preloaded images
	I1128 00:40:42.339105   46126 preload.go:174] Found /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 00:40:42.339117   46126 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 00:40:42.339232   46126 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/config.json ...
	I1128 00:40:42.339416   46126 start.go:365] acquiring machines lock for default-k8s-diff-port-488423: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 00:40:43.785024   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:40:49.865013   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:40:52.936964   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:40:59.017058   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:02.089017   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:08.169021   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:11.241040   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:17.321032   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:20.393000   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:26.473039   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:29.544989   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:35.625074   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:38.697020   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:44.777040   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:47.849040   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:53.929055   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:57.001005   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:03.081016   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:06.153078   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:12.233029   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:15.305165   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:21.385067   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:24.457038   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:30.537025   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:33.608998   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:39.689061   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:42.761012   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:48.841003   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:51.912985   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:54.916816   45580 start.go:369] acquired machines lock for "embed-certs-304541" in 3m47.030120592s
	I1128 00:42:54.916877   45580 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:42:54.916890   45580 fix.go:54] fixHost starting: 
	I1128 00:42:54.917233   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:42:54.917266   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:42:54.932296   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38887
	I1128 00:42:54.932712   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:42:54.933230   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:42:54.933254   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:42:54.933574   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:42:54.933837   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:42:54.934006   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:42:54.935712   45580 fix.go:102] recreateIfNeeded on embed-certs-304541: state=Stopped err=<nil>
	I1128 00:42:54.935737   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	W1128 00:42:54.935937   45580 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:42:54.937893   45580 out.go:177] * Restarting existing kvm2 VM for "embed-certs-304541" ...
	I1128 00:42:54.914751   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:42:54.914794   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:42:54.916666   45269 machine.go:91] provisioned docker machine in 4m37.413850055s
	I1128 00:42:54.916713   45269 fix.go:56] fixHost completed within 4m37.433506318s
	I1128 00:42:54.916719   45269 start.go:83] releasing machines lock for "old-k8s-version-732472", held for 4m37.433526985s
	W1128 00:42:54.916738   45269 start.go:691] error starting host: provision: host is not running
	W1128 00:42:54.916844   45269 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1128 00:42:54.916854   45269 start.go:706] Will try again in 5 seconds ...
	I1128 00:42:54.939120   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Start
	I1128 00:42:54.939284   45580 main.go:141] libmachine: (embed-certs-304541) Ensuring networks are active...
	I1128 00:42:54.940122   45580 main.go:141] libmachine: (embed-certs-304541) Ensuring network default is active
	I1128 00:42:54.940636   45580 main.go:141] libmachine: (embed-certs-304541) Ensuring network mk-embed-certs-304541 is active
	I1128 00:42:54.941025   45580 main.go:141] libmachine: (embed-certs-304541) Getting domain xml...
	I1128 00:42:54.941883   45580 main.go:141] libmachine: (embed-certs-304541) Creating domain...
	I1128 00:42:56.157644   45580 main.go:141] libmachine: (embed-certs-304541) Waiting to get IP...
	I1128 00:42:56.158479   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:56.158803   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:56.158888   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:56.158791   46474 retry.go:31] will retry after 235.266272ms: waiting for machine to come up
	I1128 00:42:56.395238   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:56.395630   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:56.395664   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:56.395579   46474 retry.go:31] will retry after 352.110542ms: waiting for machine to come up
	I1128 00:42:56.749150   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:56.749542   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:56.749570   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:56.749500   46474 retry.go:31] will retry after 364.122623ms: waiting for machine to come up
	I1128 00:42:57.115054   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:57.115497   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:57.115526   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:57.115450   46474 retry.go:31] will retry after 583.197763ms: waiting for machine to come up
	I1128 00:42:57.700134   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:57.700551   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:57.700577   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:57.700497   46474 retry.go:31] will retry after 515.615548ms: waiting for machine to come up
	I1128 00:42:59.917964   45269 start.go:365] acquiring machines lock for old-k8s-version-732472: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 00:42:58.218252   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:58.218630   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:58.218668   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:58.218611   46474 retry.go:31] will retry after 690.258077ms: waiting for machine to come up
	I1128 00:42:58.910090   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:58.910438   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:58.910464   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:58.910413   46474 retry.go:31] will retry after 737.779074ms: waiting for machine to come up
	I1128 00:42:59.649308   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:59.649634   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:59.649661   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:59.649609   46474 retry.go:31] will retry after 1.23938471s: waiting for machine to come up
	I1128 00:43:00.890867   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:00.891318   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:00.891356   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:00.891298   46474 retry.go:31] will retry after 1.475598535s: waiting for machine to come up
	I1128 00:43:02.368630   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:02.369159   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:02.369189   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:02.369085   46474 retry.go:31] will retry after 2.323321s: waiting for machine to come up
	I1128 00:43:04.695735   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:04.696175   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:04.696208   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:04.696131   46474 retry.go:31] will retry after 1.903335453s: waiting for machine to come up
	I1128 00:43:06.601229   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:06.601657   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:06.601687   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:06.601612   46474 retry.go:31] will retry after 2.205948796s: waiting for machine to come up
	I1128 00:43:08.809792   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:08.810161   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:08.810188   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:08.810149   46474 retry.go:31] will retry after 3.31430253s: waiting for machine to come up
	I1128 00:43:12.126852   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:12.127294   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:12.127323   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:12.127249   46474 retry.go:31] will retry after 3.492216742s: waiting for machine to come up
	I1128 00:43:16.981905   45815 start.go:369] acquired machines lock for "no-preload-473615" in 3m38.128436656s
	I1128 00:43:16.981988   45815 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:43:16.982000   45815 fix.go:54] fixHost starting: 
	I1128 00:43:16.982400   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:43:16.982434   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:43:17.001935   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39505
	I1128 00:43:17.002390   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:43:17.002899   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:43:17.002930   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:43:17.003303   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:43:17.003515   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:17.003658   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:43:17.005243   45815 fix.go:102] recreateIfNeeded on no-preload-473615: state=Stopped err=<nil>
	I1128 00:43:17.005273   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	W1128 00:43:17.005442   45815 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:43:17.007831   45815 out.go:177] * Restarting existing kvm2 VM for "no-preload-473615" ...
	I1128 00:43:15.620590   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.621046   45580 main.go:141] libmachine: (embed-certs-304541) Found IP for machine: 192.168.50.93
	I1128 00:43:15.621071   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has current primary IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.621083   45580 main.go:141] libmachine: (embed-certs-304541) Reserving static IP address...
	I1128 00:43:15.621440   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "embed-certs-304541", mac: "52:54:00:0a:1d:4f", ip: "192.168.50.93"} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.621473   45580 main.go:141] libmachine: (embed-certs-304541) DBG | skip adding static IP to network mk-embed-certs-304541 - found existing host DHCP lease matching {name: "embed-certs-304541", mac: "52:54:00:0a:1d:4f", ip: "192.168.50.93"}
	I1128 00:43:15.621484   45580 main.go:141] libmachine: (embed-certs-304541) Reserved static IP address: 192.168.50.93
	I1128 00:43:15.621498   45580 main.go:141] libmachine: (embed-certs-304541) Waiting for SSH to be available...
	I1128 00:43:15.621516   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Getting to WaitForSSH function...
	I1128 00:43:15.623594   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.623865   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.623897   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.623968   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Using SSH client type: external
	I1128 00:43:15.623989   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa (-rw-------)
	I1128 00:43:15.624044   45580 main.go:141] libmachine: (embed-certs-304541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.93 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:43:15.624057   45580 main.go:141] libmachine: (embed-certs-304541) DBG | About to run SSH command:
	I1128 00:43:15.624068   45580 main.go:141] libmachine: (embed-certs-304541) DBG | exit 0
	I1128 00:43:15.708868   45580 main.go:141] libmachine: (embed-certs-304541) DBG | SSH cmd err, output: <nil>: 
	I1128 00:43:15.709246   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetConfigRaw
	I1128 00:43:15.709989   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetIP
	I1128 00:43:15.712312   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.712623   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.712660   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.712968   45580 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/config.json ...
	I1128 00:43:15.713166   45580 machine.go:88] provisioning docker machine ...
	I1128 00:43:15.713183   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:15.713360   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetMachineName
	I1128 00:43:15.713552   45580 buildroot.go:166] provisioning hostname "embed-certs-304541"
	I1128 00:43:15.713573   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetMachineName
	I1128 00:43:15.713731   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:15.716027   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.716386   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.716419   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.716530   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:15.716703   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:15.716856   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:15.717034   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:15.717229   45580 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:15.717565   45580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.93 22 <nil> <nil>}
	I1128 00:43:15.717579   45580 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-304541 && echo "embed-certs-304541" | sudo tee /etc/hostname
	I1128 00:43:15.841766   45580 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-304541
	
	I1128 00:43:15.841821   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:15.844529   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.844872   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.844919   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.845037   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:15.845231   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:15.845360   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:15.845476   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:15.845616   45580 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:15.845976   45580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.93 22 <nil> <nil>}
	I1128 00:43:15.846002   45580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-304541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-304541/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-304541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:43:15.965821   45580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:43:15.965855   45580 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:43:15.965876   45580 buildroot.go:174] setting up certificates
	I1128 00:43:15.965890   45580 provision.go:83] configureAuth start
	I1128 00:43:15.965903   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetMachineName
	I1128 00:43:15.966183   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetIP
	I1128 00:43:15.968916   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.969285   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.969313   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.969483   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:15.971549   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.971913   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.971949   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.972092   45580 provision.go:138] copyHostCerts
	I1128 00:43:15.972168   45580 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:43:15.972182   45580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:43:15.972260   45580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:43:15.972415   45580 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:43:15.972427   45580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:43:15.972472   45580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:43:15.972562   45580 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:43:15.972572   45580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:43:15.972603   45580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:43:15.972663   45580 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.embed-certs-304541 san=[192.168.50.93 192.168.50.93 localhost 127.0.0.1 minikube embed-certs-304541]
	I1128 00:43:16.272269   45580 provision.go:172] copyRemoteCerts
	I1128 00:43:16.272333   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:43:16.272354   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.274793   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.275102   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.275138   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.275285   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.275495   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.275628   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.275752   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:43:16.361853   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 00:43:16.386340   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:43:16.410490   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1128 00:43:16.433471   45580 provision.go:86] duration metric: configureAuth took 467.56808ms
	I1128 00:43:16.433505   45580 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:43:16.433686   45580 config.go:182] Loaded profile config "embed-certs-304541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:43:16.433760   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.436514   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.436987   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.437029   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.437129   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.437316   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.437472   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.437614   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.437748   45580 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:16.438055   45580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.93 22 <nil> <nil>}
	I1128 00:43:16.438072   45580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:43:16.732374   45580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:43:16.732407   45580 machine.go:91] provisioned docker machine in 1.019227514s
	I1128 00:43:16.732419   45580 start.go:300] post-start starting for "embed-certs-304541" (driver="kvm2")
	I1128 00:43:16.732429   45580 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:43:16.732474   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.732847   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:43:16.732879   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.735564   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.735987   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.736027   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.736210   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.736393   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.736549   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.736714   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:43:16.824741   45580 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:43:16.829313   45580 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:43:16.829347   45580 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:43:16.829426   45580 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:43:16.829529   45580 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:43:16.829642   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:43:16.839740   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:16.862881   45580 start.go:303] post-start completed in 130.432418ms
	I1128 00:43:16.862911   45580 fix.go:56] fixHost completed within 21.946020541s
	I1128 00:43:16.862938   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.865721   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.866113   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.866144   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.866336   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.866545   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.866744   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.866869   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.867046   45580 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:16.867350   45580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.93 22 <nil> <nil>}
	I1128 00:43:16.867359   45580 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:43:16.981759   45580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701132196.930241591
	
	I1128 00:43:16.981779   45580 fix.go:206] guest clock: 1701132196.930241591
	I1128 00:43:16.981786   45580 fix.go:219] Guest: 2023-11-28 00:43:16.930241591 +0000 UTC Remote: 2023-11-28 00:43:16.862915941 +0000 UTC m=+249.133993071 (delta=67.32565ms)
	I1128 00:43:16.981804   45580 fix.go:190] guest clock delta is within tolerance: 67.32565ms
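
Note: the guest-clock check above amounts to running `date +%s.%N` over SSH, parsing the result, and comparing it against the host's wall clock. A minimal Go sketch of that comparison follows; the one-second tolerance in main is an illustrative assumption, not minikube's actual constant.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// guestClockDelta parses the output of `date +%s.%N` captured on the guest
// and returns the absolute difference from the supplied host clock reading.
func guestClockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOutput, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := host.Sub(guest)
	return time.Duration(math.Abs(float64(delta))), nil
}

func main() {
	// The sample value is taken from the log above; 1s tolerance is illustrative only.
	delta, err := guestClockDelta("1701132196.930241591", time.Now())
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta: %v (within 1s tolerance: %v)\n", delta, delta < time.Second)
}
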
	I1128 00:43:16.981809   45580 start.go:83] releasing machines lock for "embed-certs-304541", held for 22.064954687s
	I1128 00:43:16.981848   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.982121   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetIP
	I1128 00:43:16.984621   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.984927   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.984986   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.985171   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.985675   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.985825   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.985892   45580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:43:16.985926   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.986025   45580 ssh_runner.go:195] Run: cat /version.json
	I1128 00:43:16.986054   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.988651   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.988839   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.989079   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.989113   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.989367   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.989411   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.989451   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.989491   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.989544   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.989648   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.989692   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.989781   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.989860   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:43:16.989933   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:43:17.104567   45580 ssh_runner.go:195] Run: systemctl --version
	I1128 00:43:17.110844   45580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:43:17.254201   45580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:43:17.262078   45580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:43:17.262154   45580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:43:17.282179   45580 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:43:17.282209   45580 start.go:472] detecting cgroup driver to use...
	I1128 00:43:17.282271   45580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:43:17.296891   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:43:17.311479   45580 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:43:17.311552   45580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:43:17.325946   45580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:43:17.340513   45580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:43:17.469601   45580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:43:17.605127   45580 docker.go:219] disabling docker service ...
	I1128 00:43:17.605199   45580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:43:17.621850   45580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:43:17.634608   45580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:43:17.753009   45580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:43:17.859260   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:43:17.872564   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:43:17.889701   45580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 00:43:17.889755   45580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:17.898724   45580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:43:17.898799   45580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:17.907565   45580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:17.916243   45580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:17.925280   45580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:43:17.934933   45580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:43:17.943902   45580 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:43:17.943960   45580 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:43:17.957608   45580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
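
Note: the netfilter probing above fails at first because the br_netfilter module is not loaded, so the runner loads it and then enables IP forwarding. The same check can be mirrored with a small Go sketch (illustrative only, not minikube's implementation; it must run as root, and the paths are the standard procfs locations).

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter makes bridged traffic visible to iptables: if the
// sysctl file is missing, load br_netfilter, then enable ip_forward.
func ensureBridgeNetfilter() error {
	const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(sysctlPath); os.IsNotExist(err) {
		// Same situation as the "cannot stat" error in the log above.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("bridge netfilter ready")
}
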
	I1128 00:43:17.967379   45580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:43:18.074173   45580 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 00:43:18.251191   45580 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:43:18.251264   45580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:43:18.259963   45580 start.go:540] Will wait 60s for crictl version
	I1128 00:43:18.260041   45580 ssh_runner.go:195] Run: which crictl
	I1128 00:43:18.263936   45580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:43:18.303087   45580 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:43:18.303181   45580 ssh_runner.go:195] Run: crio --version
	I1128 00:43:18.344939   45580 ssh_runner.go:195] Run: crio --version
	I1128 00:43:18.402444   45580 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 00:43:17.009281   45815 main.go:141] libmachine: (no-preload-473615) Calling .Start
	I1128 00:43:17.009442   45815 main.go:141] libmachine: (no-preload-473615) Ensuring networks are active...
	I1128 00:43:17.010161   45815 main.go:141] libmachine: (no-preload-473615) Ensuring network default is active
	I1128 00:43:17.010485   45815 main.go:141] libmachine: (no-preload-473615) Ensuring network mk-no-preload-473615 is active
	I1128 00:43:17.010860   45815 main.go:141] libmachine: (no-preload-473615) Getting domain xml...
	I1128 00:43:17.011780   45815 main.go:141] libmachine: (no-preload-473615) Creating domain...
	I1128 00:43:18.289916   45815 main.go:141] libmachine: (no-preload-473615) Waiting to get IP...
	I1128 00:43:18.290892   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:18.291348   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:18.291434   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:18.291321   46604 retry.go:31] will retry after 208.579367ms: waiting for machine to come up
	I1128 00:43:18.501947   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:18.502401   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:18.502431   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:18.502362   46604 retry.go:31] will retry after 296.427399ms: waiting for machine to come up
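
Note: the interleaved "will retry after ...: waiting for machine to come up" lines are produced by a retry loop that keeps polling the domain for a DHCP lease with growing, jittered delays. A rough Go sketch of that pattern is below; the delays, jitter, and deadline are illustrative assumptions and the real retry.go logic may differ.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// waitForIP polls lookup until it returns an address, sleeping a little
// longer (with jitter) after each failed attempt, up to a deadline.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 200 * time.Millisecond
	for time.Since(start) < deadline {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // grow the base delay gradually
	}
	return "", fmt.Errorf("timed out after %v: %w", deadline, errNoIP)
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errNoIP // simulate the DHCP lease not being assigned yet
		}
		return "192.168.61.195", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}
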
	I1128 00:43:18.403974   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetIP
	I1128 00:43:18.406811   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:18.407171   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:18.407201   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:18.407459   45580 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1128 00:43:18.411727   45580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:18.423460   45580 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:43:18.423570   45580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:43:18.463722   45580 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 00:43:18.463797   45580 ssh_runner.go:195] Run: which lz4
	I1128 00:43:18.468008   45580 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 00:43:18.472523   45580 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 00:43:18.472560   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 00:43:20.378745   45580 crio.go:444] Took 1.910818 seconds to copy over tarball
	I1128 00:43:20.378836   45580 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 00:43:18.801131   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:18.801707   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:18.801741   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:18.801666   46604 retry.go:31] will retry after 355.365314ms: waiting for machine to come up
	I1128 00:43:19.159088   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:19.159590   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:19.159628   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:19.159550   46604 retry.go:31] will retry after 584.908889ms: waiting for machine to come up
	I1128 00:43:19.746379   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:19.746941   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:19.746978   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:19.746901   46604 retry.go:31] will retry after 707.432097ms: waiting for machine to come up
	I1128 00:43:20.455880   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:20.456378   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:20.456402   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:20.456346   46604 retry.go:31] will retry after 598.57984ms: waiting for machine to come up
	I1128 00:43:21.056102   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:21.056548   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:21.056579   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:21.056500   46604 retry.go:31] will retry after 742.55033ms: waiting for machine to come up
	I1128 00:43:21.800382   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:21.800895   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:21.800926   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:21.800841   46604 retry.go:31] will retry after 1.138217867s: waiting for machine to come up
	I1128 00:43:22.941401   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:22.941902   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:22.941932   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:22.941867   46604 retry.go:31] will retry after 1.552423219s: waiting for machine to come up
	I1128 00:43:23.310969   45580 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.932089296s)
	I1128 00:43:23.311004   45580 crio.go:451] Took 2.932228 seconds to extract the tarball
	I1128 00:43:23.311017   45580 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 00:43:23.351844   45580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:43:23.397599   45580 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 00:43:23.397625   45580 cache_images.go:84] Images are preloaded, skipping loading
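
Note: the preload step above copies preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 to the guest and unpacks it with `tar -I lz4 -C /var -xf /preloaded.tar.lz4`. The sketch below is a minimal Go equivalent of that extraction, using the third-party github.com/pierrec/lz4/v4 package; it is illustrative only and not the code minikube actually runs (minikube shells out to tar on the guest).

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"
	"path/filepath"

	"github.com/pierrec/lz4/v4"
)

// extractTarLz4 unpacks an lz4-compressed tarball under dst,
// roughly what `tar -I lz4 -C dst -xf src` does.
func extractTarLz4(src, dst string) error {
	f, err := os.Open(src)
	if err != nil {
		return err
	}
	defer f.Close()

	tr := tar.NewReader(lz4.NewReader(f))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		target := filepath.Join(dst, hdr.Name)
		switch hdr.Typeflag {
		case tar.TypeDir:
			if err := os.MkdirAll(target, os.FileMode(hdr.Mode)); err != nil {
				return err
			}
		case tar.TypeReg:
			if err := os.MkdirAll(filepath.Dir(target), 0755); err != nil {
				return err
			}
			out, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(hdr.Mode))
			if err != nil {
				return err
			}
			if _, err := io.Copy(out, tr); err != nil {
				out.Close()
				return err
			}
			out.Close()
		}
	}
}

func main() {
	// Paths mirror the log above; adjust for local experimentation.
	if err := extractTarLz4("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
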
	I1128 00:43:23.397705   45580 ssh_runner.go:195] Run: crio config
	I1128 00:43:23.460298   45580 cni.go:84] Creating CNI manager for ""
	I1128 00:43:23.460326   45580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:43:23.460348   45580 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:43:23.460383   45580 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.93 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-304541 NodeName:embed-certs-304541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.93"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.93 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:43:23.460547   45580 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.93
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-304541"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.93
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.93"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:43:23.460641   45580 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-304541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.93
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-304541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 00:43:23.460696   45580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 00:43:23.470334   45580 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:43:23.470410   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:43:23.480675   45580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1128 00:43:23.497482   45580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:43:23.513709   45580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1128 00:43:23.530363   45580 ssh_runner.go:195] Run: grep 192.168.50.93	control-plane.minikube.internal$ /etc/hosts
	I1128 00:43:23.533938   45580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.93	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:23.546399   45580 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541 for IP: 192.168.50.93
	I1128 00:43:23.546443   45580 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:43:23.546632   45580 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:43:23.546695   45580 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:43:23.546799   45580 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/client.key
	I1128 00:43:23.546892   45580 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/apiserver.key.9bda4d83
	I1128 00:43:23.546960   45580 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/proxy-client.key
	I1128 00:43:23.547122   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:43:23.547178   45580 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:43:23.547196   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:43:23.547237   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:43:23.547280   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:43:23.547317   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:43:23.547392   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:23.548287   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:43:23.571910   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1128 00:43:23.597339   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:43:23.621977   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 00:43:23.648048   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:43:23.671213   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:43:23.695307   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:43:23.719122   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:43:23.743153   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:43:23.766469   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:43:23.789932   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:43:23.813950   45580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:43:23.830291   45580 ssh_runner.go:195] Run: openssl version
	I1128 00:43:23.837945   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:43:23.847572   45580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:23.852284   45580 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:23.852334   45580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:23.860003   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:43:23.872829   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:43:23.886286   45580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:43:23.892997   45580 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:43:23.893079   45580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:43:23.899923   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:43:23.909771   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:43:23.919498   45580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:43:23.924066   45580 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:43:23.924126   45580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:43:23.929583   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:43:23.939366   45580 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:43:23.944091   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:43:23.950255   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:43:23.956493   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:43:23.962278   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:43:23.970032   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:43:23.977660   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
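
Note: each `openssl x509 -noout -in <cert> -checkend 86400` call above simply verifies that the certificate is still valid for at least another 24 hours before the cluster is restarted. The same check can be expressed with Go's standard crypto/x509 package; the sketch below is illustrative, with the certificate paths copied from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// the given window (the openssl equivalent is `-checkend <seconds>`).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Printf("%s expiring within 24h: %v (err: %v)\n", p, soon, err)
	}
}
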
	I1128 00:43:23.984257   45580 kubeadm.go:404] StartCluster: {Name:embed-certs-304541 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-304541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.93 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:43:23.984408   45580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:43:23.984471   45580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:43:24.026147   45580 cri.go:89] found id: ""
	I1128 00:43:24.026222   45580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:43:24.035520   45580 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:43:24.035550   45580 kubeadm.go:636] restartCluster start
	I1128 00:43:24.035631   45580 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:43:24.044318   45580 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:24.045591   45580 kubeconfig.go:92] found "embed-certs-304541" server: "https://192.168.50.93:8443"
	I1128 00:43:24.047987   45580 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:43:24.056482   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:24.056541   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:24.067055   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:24.067072   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:24.067108   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:24.076950   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:24.577344   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:24.577441   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:24.588707   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:25.077862   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:25.077965   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:25.089729   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:25.577938   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:25.578019   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:25.593191   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:26.077819   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:26.077891   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:26.091224   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:26.577757   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:26.577844   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:26.588769   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:27.077106   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:27.077235   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:27.088668   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:27.577169   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:27.577249   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:27.588221   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:24.496599   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:24.496989   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:24.497018   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:24.496943   46604 retry.go:31] will retry after 2.05343917s: waiting for machine to come up
	I1128 00:43:26.552249   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:26.552684   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:26.552716   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:26.552636   46604 retry.go:31] will retry after 2.338063311s: waiting for machine to come up
	I1128 00:43:28.077161   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:28.077265   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:28.088552   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:28.577077   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:28.577168   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:28.588335   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:29.077927   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:29.078027   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:29.089679   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:29.577193   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:29.577293   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:29.588230   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:30.077430   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:30.077542   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:30.088547   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:30.577088   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:30.577203   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:30.588230   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:31.077809   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:31.077907   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:31.090329   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:31.577897   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:31.577975   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:31.591561   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:32.077101   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:32.077206   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:32.087945   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:32.577446   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:32.577528   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:32.588542   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:28.893450   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:28.893812   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:28.893841   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:28.893761   46604 retry.go:31] will retry after 3.578756905s: waiting for machine to come up
	I1128 00:43:32.473719   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:32.474199   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:32.474234   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:32.474155   46604 retry.go:31] will retry after 3.070637163s: waiting for machine to come up
	I1128 00:43:36.805769   46126 start.go:369] acquired machines lock for "default-k8s-diff-port-488423" in 2m54.466321295s
	I1128 00:43:36.805830   46126 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:43:36.805840   46126 fix.go:54] fixHost starting: 
	I1128 00:43:36.806271   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:43:36.806311   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:43:36.825195   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32859
	I1128 00:43:36.825723   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:43:36.826325   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:43:36.826348   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:43:36.826703   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:43:36.826932   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:36.827106   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:43:36.828683   46126 fix.go:102] recreateIfNeeded on default-k8s-diff-port-488423: state=Stopped err=<nil>
	I1128 00:43:36.828709   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	W1128 00:43:36.828895   46126 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:43:36.830377   46126 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-488423" ...
	I1128 00:43:36.831614   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Start
	I1128 00:43:36.831781   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Ensuring networks are active...
	I1128 00:43:36.832447   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Ensuring network default is active
	I1128 00:43:36.832841   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Ensuring network mk-default-k8s-diff-port-488423 is active
	I1128 00:43:36.833220   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Getting domain xml...
	I1128 00:43:36.833947   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Creating domain...
	I1128 00:43:33.077031   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:33.077109   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:33.088430   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:33.578007   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:33.578093   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:33.589185   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:34.056684   45580 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:43:34.056718   45580 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:43:34.056733   45580 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:43:34.056836   45580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:43:34.096078   45580 cri.go:89] found id: ""
	I1128 00:43:34.096157   45580 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:43:34.111200   45580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:43:34.119603   45580 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:43:34.119654   45580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:43:34.128150   45580 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:43:34.128170   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:34.236389   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:34.879134   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:35.070594   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:35.159436   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:35.223694   45580 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:43:35.223787   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:35.238511   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:35.753955   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:36.254449   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:36.753943   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:37.253987   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:37.753515   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:37.777619   45580 api_server.go:72] duration metric: took 2.553922938s to wait for apiserver process to appear ...
	I1128 00:43:37.777646   45580 api_server.go:88] waiting for apiserver healthz status ...
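
Note: after the control-plane manifests are regenerated, the log waits first for the kube-apiserver process and then for its healthz endpoint to report healthy. Polling healthz is a simple HTTP loop; the sketch below is illustrative only, with the URL and the skip-TLS-verify client being assumptions for a self-contained example rather than minikube's exact client setup.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns "ok" or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a certificate this sketch does not trust, so skip
		// verification here; a real client would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz not ready after %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.93:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}
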
	I1128 00:43:35.548294   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.548718   45815 main.go:141] libmachine: (no-preload-473615) Found IP for machine: 192.168.61.195
	I1128 00:43:35.548746   45815 main.go:141] libmachine: (no-preload-473615) Reserving static IP address...
	I1128 00:43:35.548790   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has current primary IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.549194   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "no-preload-473615", mac: "52:54:00:bb:93:0d", ip: "192.168.61.195"} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.549223   45815 main.go:141] libmachine: (no-preload-473615) DBG | skip adding static IP to network mk-no-preload-473615 - found existing host DHCP lease matching {name: "no-preload-473615", mac: "52:54:00:bb:93:0d", ip: "192.168.61.195"}
	I1128 00:43:35.549238   45815 main.go:141] libmachine: (no-preload-473615) Reserved static IP address: 192.168.61.195
	I1128 00:43:35.549253   45815 main.go:141] libmachine: (no-preload-473615) Waiting for SSH to be available...
	I1128 00:43:35.549265   45815 main.go:141] libmachine: (no-preload-473615) DBG | Getting to WaitForSSH function...
	I1128 00:43:35.551246   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.551573   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.551601   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.551757   45815 main.go:141] libmachine: (no-preload-473615) DBG | Using SSH client type: external
	I1128 00:43:35.551778   45815 main.go:141] libmachine: (no-preload-473615) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa (-rw-------)
	I1128 00:43:35.551811   45815 main.go:141] libmachine: (no-preload-473615) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:43:35.551831   45815 main.go:141] libmachine: (no-preload-473615) DBG | About to run SSH command:
	I1128 00:43:35.551867   45815 main.go:141] libmachine: (no-preload-473615) DBG | exit 0
	I1128 00:43:35.636291   45815 main.go:141] libmachine: (no-preload-473615) DBG | SSH cmd err, output: <nil>: 
	I1128 00:43:35.636667   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetConfigRaw
	I1128 00:43:35.637278   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetIP
	I1128 00:43:35.639799   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.640164   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.640209   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.640423   45815 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/config.json ...
	I1128 00:43:35.640598   45815 machine.go:88] provisioning docker machine ...
	I1128 00:43:35.640632   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:35.640853   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetMachineName
	I1128 00:43:35.641071   45815 buildroot.go:166] provisioning hostname "no-preload-473615"
	I1128 00:43:35.641090   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetMachineName
	I1128 00:43:35.641242   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:35.643554   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.643845   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.643905   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.643977   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:35.644140   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:35.644279   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:35.644370   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:35.644540   45815 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:35.644971   45815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I1128 00:43:35.644986   45815 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-473615 && echo "no-preload-473615" | sudo tee /etc/hostname
	I1128 00:43:35.766635   45815 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-473615
	
	I1128 00:43:35.766689   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:35.769704   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.770068   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.770108   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.770279   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:35.770463   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:35.770622   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:35.770733   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:35.770849   45815 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:35.771214   45815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I1128 00:43:35.771235   45815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-473615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-473615/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-473615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:43:35.889378   45815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:43:35.889416   45815 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:43:35.889480   45815 buildroot.go:174] setting up certificates
	I1128 00:43:35.889494   45815 provision.go:83] configureAuth start
	I1128 00:43:35.889506   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetMachineName
	I1128 00:43:35.889810   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetIP
	I1128 00:43:35.892924   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.893313   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.893359   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.893477   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:35.895759   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.896140   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.896169   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.896281   45815 provision.go:138] copyHostCerts
	I1128 00:43:35.896345   45815 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:43:35.896370   45815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:43:35.896448   45815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:43:35.896565   45815 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:43:35.896577   45815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:43:35.896618   45815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:43:35.896713   45815 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:43:35.896728   45815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:43:35.896778   45815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:43:35.896856   45815 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.no-preload-473615 san=[192.168.61.195 192.168.61.195 localhost 127.0.0.1 minikube no-preload-473615]
	I1128 00:43:36.080367   45815 provision.go:172] copyRemoteCerts
	I1128 00:43:36.080429   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:43:36.080451   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.082989   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.083327   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.083358   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.083529   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.083745   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.083927   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.084077   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:43:36.166338   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:43:36.191867   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1128 00:43:36.214184   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 00:43:36.237102   45815 provision.go:86] duration metric: configureAuth took 347.594627ms
	I1128 00:43:36.237135   45815 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:43:36.237338   45815 config.go:182] Loaded profile config "no-preload-473615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 00:43:36.237421   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.240408   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.240787   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.240826   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.240995   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.241193   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.241368   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.241539   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.241712   45815 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:36.242000   45815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I1128 00:43:36.242016   45815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:43:36.565582   45815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:43:36.565609   45815 machine.go:91] provisioned docker machine in 924.985826ms
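	The %!s(MISSING) in the printf two entries up is a Go format-verb artifact in the log output; judging from the SSH result echoed back, the command actually run on the guest is roughly:

		# write the CRI-O sysconfig drop-in shown in the SSH output, then restart CRI-O
		sudo mkdir -p /etc/sysconfig
		printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
		sudo systemctl restart crio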
	I1128 00:43:36.565623   45815 start.go:300] post-start starting for "no-preload-473615" (driver="kvm2")
	I1128 00:43:36.565649   45815 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:43:36.565677   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.565994   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:43:36.566025   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.568653   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.569032   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.569064   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.569148   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.569337   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.569502   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.569678   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:43:36.655695   45815 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:43:36.659909   45815 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:43:36.659941   45815 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:43:36.660020   45815 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:43:36.660108   45815 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:43:36.660228   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:43:36.669575   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:36.690970   45815 start.go:303] post-start completed in 125.33198ms
	I1128 00:43:36.690998   45815 fix.go:56] fixHost completed within 19.708998537s
	I1128 00:43:36.691022   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.693929   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.694361   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.694400   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.694665   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.694877   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.695064   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.695237   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.695404   45815 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:36.695738   45815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I1128 00:43:36.695750   45815 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:43:36.805602   45815 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701132216.779589412
	
	I1128 00:43:36.805626   45815 fix.go:206] guest clock: 1701132216.779589412
	I1128 00:43:36.805637   45815 fix.go:219] Guest: 2023-11-28 00:43:36.779589412 +0000 UTC Remote: 2023-11-28 00:43:36.691003095 +0000 UTC m=+237.986754258 (delta=88.586317ms)
	I1128 00:43:36.805673   45815 fix.go:190] guest clock delta is within tolerance: 88.586317ms
	I1128 00:43:36.805678   45815 start.go:83] releasing machines lock for "no-preload-473615", held for 19.823720426s
	I1128 00:43:36.805705   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.805989   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetIP
	I1128 00:43:36.808864   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.809316   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.809346   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.809529   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.810162   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.810361   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.810441   45815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:43:36.810494   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.810824   45815 ssh_runner.go:195] Run: cat /version.json
	I1128 00:43:36.810845   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.813747   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.813979   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.814064   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.814102   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.814263   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.814444   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.814471   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.814508   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.814659   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.814764   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.814844   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.814913   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:43:36.815484   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.815640   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:43:36.923054   45815 ssh_runner.go:195] Run: systemctl --version
	I1128 00:43:36.930078   45815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:43:37.082251   45815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:43:37.088817   45815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:43:37.088890   45815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:43:37.110921   45815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:43:37.110950   45815 start.go:472] detecting cgroup driver to use...
	I1128 00:43:37.111017   45815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:43:37.128450   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:43:37.144814   45815 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:43:37.144875   45815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:43:37.158185   45815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:43:37.170311   45815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:43:37.287910   45815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:43:37.414142   45815 docker.go:219] disabling docker service ...
	I1128 00:43:37.414222   45815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:43:37.427085   45815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:43:37.438631   45815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:43:37.559028   45815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:43:37.676646   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:43:37.689214   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:43:37.709298   45815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 00:43:37.709370   45815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:37.718368   45815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:43:37.718446   45815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:37.727188   45815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:37.736230   45815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:37.745594   45815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:43:37.755149   45815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:43:37.763179   45815 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:43:37.763237   45815 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:43:37.780091   45815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:43:37.790861   45815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:43:37.923396   45815 ssh_runner.go:195] Run: sudo systemctl restart crio
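	Taken together, the CRI-O reconfiguration above is a handful of in-place edits to /etc/crio/crio.conf.d/02-crio.conf plus enabling bridge netfilter and IPv4 forwarding, followed by a daemon-reload and restart. Condensed into hand-runnable form (same file path as in the log, GNU sed assumed):

		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
		sudo modprobe br_netfilter
		echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
		sudo systemctl daemon-reload && sudo systemctl restart crio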
	I1128 00:43:38.133933   45815 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:43:38.134013   45815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:43:38.143538   45815 start.go:540] Will wait 60s for crictl version
	I1128 00:43:38.143598   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.149212   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:43:38.205988   45815 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:43:38.206079   45815 ssh_runner.go:195] Run: crio --version
	I1128 00:43:38.261211   45815 ssh_runner.go:195] Run: crio --version
	I1128 00:43:38.315398   45815 out.go:177] * Preparing Kubernetes v1.29.0-rc.0 on CRI-O 1.24.1 ...
	I1128 00:43:38.317052   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetIP
	I1128 00:43:38.320262   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:38.320708   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:38.320736   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:38.320976   45815 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1128 00:43:38.325437   45815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:38.337411   45815 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1128 00:43:38.337457   45815 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:43:38.384218   45815 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.0". assuming images are not preloaded.
	I1128 00:43:38.384245   45815 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.0 registry.k8s.io/kube-controller-manager:v1.29.0-rc.0 registry.k8s.io/kube-scheduler:v1.29.0-rc.0 registry.k8s.io/kube-proxy:v1.29.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1128 00:43:38.384325   45815 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:38.384533   45815 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.384553   45815 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1128 00:43:38.384634   45815 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.384726   45815 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.384817   45815 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.384870   45815 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.384931   45815 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.386318   45815 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:38.386368   45815 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1128 00:43:38.386381   45815 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.386373   45815 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.386324   45815 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.386316   45815 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.386319   45815 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.386326   45815 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.526945   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.527246   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1128 00:43:38.538042   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.538097   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.539522   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.549538   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.557097   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.621381   45815 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.0" does not exist at hash "4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9" in container runtime
	I1128 00:43:38.621440   45815 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.621516   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.208059   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting to get IP...
	I1128 00:43:38.209168   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.209599   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.209688   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:38.209572   46749 retry.go:31] will retry after 256.562292ms: waiting for machine to come up
	I1128 00:43:38.468199   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.468798   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.468828   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:38.468722   46749 retry.go:31] will retry after 287.91937ms: waiting for machine to come up
	I1128 00:43:38.758157   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.758610   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.758640   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:38.758555   46749 retry.go:31] will retry after 377.696379ms: waiting for machine to come up
	I1128 00:43:39.138269   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:39.138761   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:39.138795   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:39.138706   46749 retry.go:31] will retry after 476.093256ms: waiting for machine to come up
	I1128 00:43:39.616256   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:39.616611   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:39.616638   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:39.616577   46749 retry.go:31] will retry after 628.654941ms: waiting for machine to come up
	I1128 00:43:40.246993   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:40.247498   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:40.247543   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:40.247455   46749 retry.go:31] will retry after 607.981973ms: waiting for machine to come up
	I1128 00:43:40.857220   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:40.857634   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:40.857663   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:40.857592   46749 retry.go:31] will retry after 866.108704ms: waiting for machine to come up
	I1128 00:43:41.725140   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:41.725695   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:41.725716   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:41.725609   46749 retry.go:31] will retry after 1.158669064s: waiting for machine to come up
	I1128 00:43:37.777663   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:42.028441   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:43:42.028478   45580 api_server.go:103] status: https://192.168.50.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:43:42.028492   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:42.043818   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:43:42.043846   45580 api_server.go:103] status: https://192.168.50.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:43:42.544532   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:42.551469   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:43:42.551505   45580 api_server.go:103] status: https://192.168.50.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:43:43.044055   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:43.050233   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:43:43.050262   45580 api_server.go:103] status: https://192.168.50.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:43:43.544857   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:43.550155   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 200:
	ok
	I1128 00:43:43.558929   45580 api_server.go:141] control plane version: v1.28.4
	I1128 00:43:43.558962   45580 api_server.go:131] duration metric: took 5.781308354s to wait for apiserver health ...
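	The healthz wait above is the same poll-until-ready pattern: the early 403s are the anonymous probe being rejected before the RBAC bootstrap roles exist, the 500s are poststart hooks (rbac/bootstrap-roles, then scheduling/bootstrap-system-priority-classes) still reporting failure, and the loop stops at the first plain 200 "ok". A rough hand-run equivalent of the probe (self-signed serving cert, hence -k) is:

		until curl -ks https://192.168.50.93:8443/healthz | grep -qx ok; do sleep 0.5; done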
	I1128 00:43:43.558974   45580 cni.go:84] Creating CNI manager for ""
	I1128 00:43:43.558984   45580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:43:43.560872   45580 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:43:38.775724   45815 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1128 00:43:38.775776   45815 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.775827   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.775953   45815 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1128 00:43:38.776035   45815 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.0" does not exist at hash "e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7" in container runtime
	I1128 00:43:38.776059   45815 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.776106   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.776188   45815 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.0" does not exist at hash "e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4" in container runtime
	I1128 00:43:38.776220   45815 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.776247   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.776315   45815 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.0" does not exist at hash "df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55" in container runtime
	I1128 00:43:38.776335   45815 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.776360   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.776456   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.776562   45815 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.776601   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.792457   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.792533   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.792584   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.792634   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.792714   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.929517   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.0
	I1128 00:43:38.929640   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0
	I1128 00:43:38.941438   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.0
	I1128 00:43:38.941544   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0
	I1128 00:43:38.941623   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.0
	I1128 00:43:38.941704   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0
	I1128 00:43:38.964773   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1128 00:43:38.964890   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1128 00:43:38.964980   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.0
	I1128 00:43:38.965038   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0
	I1128 00:43:38.965118   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1128 00:43:38.965175   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1128 00:43:38.965250   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0 (exists)
	I1128 00:43:38.965262   45815 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0
	I1128 00:43:38.965291   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0
	I1128 00:43:38.970386   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1128 00:43:38.970443   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0 (exists)
	I1128 00:43:38.970458   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0 (exists)
	I1128 00:43:38.974722   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1128 00:43:38.974970   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0 (exists)
	I1128 00:43:39.286976   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:41.143462   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0: (2.178138495s)
	I1128 00:43:41.143491   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.0 from cache
	I1128 00:43:41.143520   45815 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1128 00:43:41.143536   45815 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.856517641s)
	I1128 00:43:41.143563   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1128 00:43:41.143596   45815 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1128 00:43:41.143630   45815 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:41.143678   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:43.335836   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.192246706s)
	I1128 00:43:43.335894   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1128 00:43:43.335859   45815 ssh_runner.go:235] Completed: which crictl: (2.192168329s)
	I1128 00:43:43.335938   45815 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0
	I1128 00:43:43.335970   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0
	I1128 00:43:43.335971   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
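The lines above show the cache_images path at work: each image tarball is copied to /var/lib/minikube/images (or skipped when it already exists), loaded into CRI-O's image store with podman, and stale copies are removed with crictl. A minimal sketch of the equivalent manual steps on the node, assuming the tarball has already been transferred (paths and image names taken from the log):

	# Load a cached image tarball into the CRI-O/podman image store and verify the runtime sees it.
	sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0
	sudo crictl images | grep kube-scheduler
	# Remove an image from the runtime when it must be re-transferred, as the log does for storage-provisioner.
	sudo crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5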
	I1128 00:43:42.886014   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:42.886540   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:42.886564   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:42.886457   46749 retry.go:31] will retry after 1.698662705s: waiting for machine to come up
	I1128 00:43:44.586452   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:44.586892   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:44.586917   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:44.586848   46749 retry.go:31] will retry after 1.681392058s: waiting for machine to come up
	I1128 00:43:46.270022   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:46.270545   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:46.270578   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:46.270491   46749 retry.go:31] will retry after 2.061464637s: waiting for machine to come up
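These retries come from the KVM driver polling libvirt until the VM's DHCP lease appears. A sketch of checking the same condition by hand, assuming the libvirt CLI tooling on the host (network name and MAC taken from the log):

	# List DHCP leases on the minikube-created libvirt network; the MAC 52:54:00:4c:3b:25 from the log
	# should eventually appear with an IPv4 address once the guest is up.
	virsh net-dhcp-leases mk-default-k8s-diff-port-488423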
	I1128 00:43:43.562274   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:43:43.583729   45580 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:43:43.614704   45580 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:43:43.627543   45580 system_pods.go:59] 8 kube-system pods found
	I1128 00:43:43.627587   45580 system_pods.go:61] "coredns-5dd5756b68-crmfq" [e412b41a-a4a4-4c8c-8fe9-b96c52e5815c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:43:43.627602   45580 system_pods.go:61] "etcd-embed-certs-304541" [ceeea55a-ffbb-4c18-b563-3552f8d47f3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 00:43:43.627622   45580 system_pods.go:61] "kube-apiserver-embed-certs-304541" [e7bd6f60-fe90-4413-b906-0101ad3bda9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 00:43:43.627632   45580 system_pods.go:61] "kube-controller-manager-embed-certs-304541" [e083fd78-3aad-44ed-8bac-fc72eeded7f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 00:43:43.627652   45580 system_pods.go:61] "kube-proxy-6d4rt" [bc801fd6-e726-41d3-afcf-5ed86723dca9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 00:43:43.627665   45580 system_pods.go:61] "kube-scheduler-embed-certs-304541" [df10b58f-43ec-4492-8d95-0d91ee88fec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 00:43:43.627676   45580 system_pods.go:61] "metrics-server-57f55c9bc5-sx4m7" [1618a041-6077-4076-8178-f2692dc983b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:43:43.627686   45580 system_pods.go:61] "storage-provisioner" [acaed13d-b10c-4fb6-b2b7-452cf928e1e5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 00:43:43.627696   45580 system_pods.go:74] duration metric: took 12.96707ms to wait for pod list to return data ...
	I1128 00:43:43.627709   45580 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:43:43.632593   45580 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:43:43.632628   45580 node_conditions.go:123] node cpu capacity is 2
	I1128 00:43:43.632642   45580 node_conditions.go:105] duration metric: took 4.924217ms to run NodePressure ...
	I1128 00:43:43.632667   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:43.945692   45580 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:43:43.950639   45580 kubeadm.go:787] kubelet initialised
	I1128 00:43:43.950666   45580 kubeadm.go:788] duration metric: took 4.940609ms waiting for restarted kubelet to initialise ...
	I1128 00:43:43.950677   45580 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:43:43.956229   45580 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-crmfq" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:45.975328   45580 pod_ready.go:102] pod "coredns-5dd5756b68-crmfq" in "kube-system" namespace has status "Ready":"False"
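pod_ready polls the kube-system pods until each reports Ready. A sketch of the same check done manually with kubectl, assuming the embed-certs-304541 context from the log:

	# Inspect the control-plane pods and wait for CoreDNS to become Ready (4m matches the log's budget).
	kubectl --context embed-certs-304541 get pods -n kube-system
	kubectl --context embed-certs-304541 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m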
	I1128 00:43:46.036655   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0: (2.700640635s)
	I1128 00:43:46.036696   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.0 from cache
	I1128 00:43:46.036721   45815 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0
	I1128 00:43:46.036786   45815 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.700708537s)
	I1128 00:43:46.036846   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1128 00:43:46.036792   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0
	I1128 00:43:46.036943   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1128 00:43:48.418287   45815 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.381312759s)
	I1128 00:43:48.418326   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0: (2.381419374s)
	I1128 00:43:48.418339   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1128 00:43:48.418346   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.0 from cache
	I1128 00:43:48.418370   45815 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1128 00:43:48.418426   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1128 00:43:48.333973   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:48.334480   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:48.334509   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:48.334432   46749 retry.go:31] will retry after 3.421790433s: waiting for machine to come up
	I1128 00:43:51.757991   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:51.758478   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:51.758505   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:51.758448   46749 retry.go:31] will retry after 3.726327818s: waiting for machine to come up
	I1128 00:43:48.484870   45580 pod_ready.go:92] pod "coredns-5dd5756b68-crmfq" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:48.484903   45580 pod_ready.go:81] duration metric: took 4.52864781s waiting for pod "coredns-5dd5756b68-crmfq" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:48.484916   45580 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:49.006488   45580 pod_ready.go:92] pod "etcd-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:49.006516   45580 pod_ready.go:81] duration metric: took 521.591023ms waiting for pod "etcd-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:49.006528   45580 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:49.014231   45580 pod_ready.go:92] pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:49.014258   45580 pod_ready.go:81] duration metric: took 7.721879ms waiting for pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:49.014270   45580 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:51.284611   45580 pod_ready.go:102] pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace has status "Ready":"False"
	I1128 00:43:52.636848   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.218389263s)
	I1128 00:43:52.636883   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1128 00:43:52.636912   45815 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0
	I1128 00:43:52.636964   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0
	I1128 00:43:56.745904   45269 start.go:369] acquired machines lock for "old-k8s-version-732472" in 56.827856444s
	I1128 00:43:56.745949   45269 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:43:56.745959   45269 fix.go:54] fixHost starting: 
	I1128 00:43:56.746379   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:43:56.746447   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:43:56.764386   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35269
	I1128 00:43:56.764907   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:43:56.765554   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:43:56.765584   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:43:56.766037   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:43:56.766221   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:43:56.766365   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:43:56.768054   45269 fix.go:102] recreateIfNeeded on old-k8s-version-732472: state=Stopped err=<nil>
	I1128 00:43:56.768082   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	W1128 00:43:56.768219   45269 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:43:56.771618   45269 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-732472" ...
	I1128 00:43:55.486531   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.487099   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Found IP for machine: 192.168.72.242
	I1128 00:43:55.487128   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Reserving static IP address...
	I1128 00:43:55.487158   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has current primary IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.487539   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-488423", mac: "52:54:00:4c:3b:25", ip: "192.168.72.242"} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.487574   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | skip adding static IP to network mk-default-k8s-diff-port-488423 - found existing host DHCP lease matching {name: "default-k8s-diff-port-488423", mac: "52:54:00:4c:3b:25", ip: "192.168.72.242"}
	I1128 00:43:55.487595   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Reserved static IP address: 192.168.72.242
	I1128 00:43:55.487609   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for SSH to be available...
	I1128 00:43:55.487622   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Getting to WaitForSSH function...
	I1128 00:43:55.489858   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.490219   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.490253   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.490324   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Using SSH client type: external
	I1128 00:43:55.490373   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa (-rw-------)
	I1128 00:43:55.490414   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.242 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:43:55.490431   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | About to run SSH command:
	I1128 00:43:55.490447   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | exit 0
	I1128 00:43:55.584551   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | SSH cmd err, output: <nil>: 
	I1128 00:43:55.584987   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetConfigRaw
	I1128 00:43:55.585628   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetIP
	I1128 00:43:55.588444   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.588889   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.588924   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.589207   46126 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/config.json ...
	I1128 00:43:55.589475   46126 machine.go:88] provisioning docker machine ...
	I1128 00:43:55.589501   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:55.589744   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetMachineName
	I1128 00:43:55.590007   46126 buildroot.go:166] provisioning hostname "default-k8s-diff-port-488423"
	I1128 00:43:55.590031   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetMachineName
	I1128 00:43:55.590203   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:55.592733   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.593136   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.593170   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.593313   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:55.593480   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.593628   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.593756   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:55.593918   46126 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:55.594316   46126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.242 22 <nil> <nil>}
	I1128 00:43:55.594333   46126 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-488423 && echo "default-k8s-diff-port-488423" | sudo tee /etc/hostname
	I1128 00:43:55.739338   46126 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-488423
	
	I1128 00:43:55.739368   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:55.742483   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.742870   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.742906   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.743009   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:55.743215   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.743365   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.743512   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:55.743669   46126 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:55.744119   46126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.242 22 <nil> <nil>}
	I1128 00:43:55.744140   46126 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-488423' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-488423/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-488423' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:43:55.883117   46126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:43:55.883146   46126 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:43:55.883185   46126 buildroot.go:174] setting up certificates
	I1128 00:43:55.883198   46126 provision.go:83] configureAuth start
	I1128 00:43:55.883216   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetMachineName
	I1128 00:43:55.883566   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetIP
	I1128 00:43:55.886292   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.886625   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.886652   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.886796   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:55.888873   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.889213   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.889233   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.889347   46126 provision.go:138] copyHostCerts
	I1128 00:43:55.889401   46126 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:43:55.889413   46126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:43:55.889478   46126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:43:55.889611   46126 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:43:55.889623   46126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:43:55.889650   46126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:43:55.889729   46126 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:43:55.889738   46126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:43:55.889765   46126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:43:55.889848   46126 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-488423 san=[192.168.72.242 192.168.72.242 localhost 127.0.0.1 minikube default-k8s-diff-port-488423]
	I1128 00:43:55.945434   46126 provision.go:172] copyRemoteCerts
	I1128 00:43:55.945516   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:43:55.945547   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:55.948894   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.949387   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.949422   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.949800   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:55.950023   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.950215   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:55.950367   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:43:56.045647   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:43:56.069972   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1128 00:43:56.093947   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 00:43:56.118840   46126 provision.go:86] duration metric: configureAuth took 235.628083ms
	I1128 00:43:56.118867   46126 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:43:56.119072   46126 config.go:182] Loaded profile config "default-k8s-diff-port-488423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:43:56.119159   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.122135   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.122514   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.122550   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.122680   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.122884   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.123076   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.123253   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.123418   46126 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:56.123729   46126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.242 22 <nil> <nil>}
	I1128 00:43:56.123746   46126 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:43:56.476330   46126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:43:56.476360   46126 machine.go:91] provisioned docker machine in 886.868182ms
	I1128 00:43:56.476384   46126 start.go:300] post-start starting for "default-k8s-diff-port-488423" (driver="kvm2")
	I1128 00:43:56.476399   46126 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:43:56.476422   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.476787   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:43:56.476824   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.479803   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.480168   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.480208   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.480342   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.480542   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.480729   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.480901   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:43:56.574040   46126 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:43:56.578163   46126 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:43:56.578186   46126 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:43:56.578247   46126 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:43:56.578339   46126 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:43:56.578455   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:43:56.586455   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:56.613452   46126 start.go:303] post-start completed in 137.050871ms
	I1128 00:43:56.613484   46126 fix.go:56] fixHost completed within 19.807643021s
	I1128 00:43:56.613510   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.616834   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.617216   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.617253   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.617478   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.617686   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.617859   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.618105   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.618302   46126 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:56.618618   46126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.242 22 <nil> <nil>}
	I1128 00:43:56.618630   46126 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:43:56.745691   46126 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701132236.690190729
	
	I1128 00:43:56.745711   46126 fix.go:206] guest clock: 1701132236.690190729
	I1128 00:43:56.745731   46126 fix.go:219] Guest: 2023-11-28 00:43:56.690190729 +0000 UTC Remote: 2023-11-28 00:43:56.613489194 +0000 UTC m=+194.421672716 (delta=76.701535ms)
	I1128 00:43:56.745784   46126 fix.go:190] guest clock delta is within tolerance: 76.701535ms
	I1128 00:43:56.745798   46126 start.go:83] releasing machines lock for "default-k8s-diff-port-488423", held for 19.939986738s
	I1128 00:43:56.745837   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.746091   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetIP
	I1128 00:43:56.749097   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.749453   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.749486   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.749648   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.750192   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.750392   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.750446   46126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:43:56.750493   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.750661   46126 ssh_runner.go:195] Run: cat /version.json
	I1128 00:43:56.750685   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.753480   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.753655   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.753948   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.753976   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.754096   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.754163   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.754191   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.754241   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.754327   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.754474   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.754489   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.754621   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:43:56.754644   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.754779   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:43:56.850794   46126 ssh_runner.go:195] Run: systemctl --version
	I1128 00:43:56.872044   46126 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:43:57.016328   46126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:43:57.022389   46126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:43:57.022463   46126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:43:57.039925   46126 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:43:57.039959   46126 start.go:472] detecting cgroup driver to use...
	I1128 00:43:57.040030   46126 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:43:57.056385   46126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:43:57.068344   46126 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:43:57.068413   46126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:43:57.081752   46126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:43:57.095169   46126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:43:57.192392   46126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:43:56.772995   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Start
	I1128 00:43:56.773150   45269 main.go:141] libmachine: (old-k8s-version-732472) Ensuring networks are active...
	I1128 00:43:56.774032   45269 main.go:141] libmachine: (old-k8s-version-732472) Ensuring network default is active
	I1128 00:43:56.774327   45269 main.go:141] libmachine: (old-k8s-version-732472) Ensuring network mk-old-k8s-version-732472 is active
	I1128 00:43:56.774732   45269 main.go:141] libmachine: (old-k8s-version-732472) Getting domain xml...
	I1128 00:43:56.775433   45269 main.go:141] libmachine: (old-k8s-version-732472) Creating domain...
	I1128 00:43:53.781169   45580 pod_ready.go:92] pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:53.781193   45580 pod_ready.go:81] duration metric: took 4.766915226s waiting for pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.781203   45580 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6d4rt" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.789370   45580 pod_ready.go:92] pod "kube-proxy-6d4rt" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:53.789400   45580 pod_ready.go:81] duration metric: took 8.189391ms waiting for pod "kube-proxy-6d4rt" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.789412   45580 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.794277   45580 pod_ready.go:92] pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:53.794299   45580 pod_ready.go:81] duration metric: took 4.87905ms waiting for pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.794307   45580 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:55.984645   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:43:57.310000   46126 docker.go:219] disabling docker service ...
	I1128 00:43:57.310066   46126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:43:57.324484   46126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:43:57.339752   46126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:43:57.444051   46126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:43:57.557773   46126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:43:57.571662   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:43:57.591169   46126 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 00:43:57.591230   46126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:57.605399   46126 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:43:57.605462   46126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:57.617783   46126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:57.629258   46126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:57.639844   46126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:43:57.651810   46126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:43:57.663353   46126 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:43:57.663403   46126 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:43:57.679095   46126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:43:57.688096   46126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:43:57.795868   46126 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 00:43:57.970597   46126 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:43:57.970661   46126 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:43:57.975830   46126 start.go:540] Will wait 60s for crictl version
	I1128 00:43:57.975900   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:43:57.980469   46126 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:43:58.022819   46126 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:43:58.022932   46126 ssh_runner.go:195] Run: crio --version
	I1128 00:43:58.078060   46126 ssh_runner.go:195] Run: crio --version
	I1128 00:43:58.130219   46126 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
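Before reaching this point the runner rewrote /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager), enabled IPv4 forwarding, and restarted CRI-O. A condensed sketch of those node-side steps, with the values copied from the log:

	# Point CRI-O at the expected pause image and cgroup driver, then restart and verify it answers CRI calls.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl restart crio
	sudo crictl version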
	I1128 00:43:55.298307   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0: (2.661319898s)
	I1128 00:43:55.298330   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.0 from cache
	I1128 00:43:55.298358   45815 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1128 00:43:55.298411   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1128 00:43:56.256987   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1128 00:43:56.257041   45815 cache_images.go:123] Successfully loaded all cached images
	I1128 00:43:56.257048   45815 cache_images.go:92] LoadImages completed in 17.872790347s
	I1128 00:43:56.257142   45815 ssh_runner.go:195] Run: crio config
	I1128 00:43:56.342206   45815 cni.go:84] Creating CNI manager for ""
	I1128 00:43:56.342230   45815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:43:56.342248   45815 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:43:56.342265   45815 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.195 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-473615 NodeName:no-preload-473615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:43:56.342421   45815 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-473615"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:43:56.342519   45815 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-473615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.0 ClusterName:no-preload-473615 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
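The generated kubeadm config and kubelet drop-in above are what get written to the node before the kubeadm init phases run. A hedged sketch of exercising a config of this shape without touching the cluster, using the binary path and config location that appear in the log (whether this exact invocation matches minikube's own flow is an assumption; --dry-run is a standard kubeadm flag):

	# Dry-run kubeadm against the generated config to surface validation errors early.
	sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run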
	I1128 00:43:56.342581   45815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.0
	I1128 00:43:56.352200   45815 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:43:56.352275   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:43:56.360863   45815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1128 00:43:56.378620   45815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1128 00:43:56.396120   45815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1128 00:43:56.415090   45815 ssh_runner.go:195] Run: grep 192.168.61.195	control-plane.minikube.internal$ /etc/hosts
	I1128 00:43:56.419072   45815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
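The hosts-file rewrite logged just above is an idempotent update: any stale line for control-plane.minikube.internal is stripped and the current address is appended. A sketch of the same pattern, with the IP taken from this run and a hypothetical temp-file name (run on the guest, not the CI host):

    # drop any existing entry for the control-plane name, then append the current one
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo "192.168.61.195	control-plane.minikube.internal"
    } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts   # cp over the existing file keeps its inode and permissions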
	I1128 00:43:56.434497   45815 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615 for IP: 192.168.61.195
	I1128 00:43:56.434534   45815 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:43:56.434702   45815 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:43:56.434766   45815 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:43:56.434899   45815 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.key
	I1128 00:43:56.434990   45815 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/apiserver.key.6c770a2d
	I1128 00:43:56.435043   45815 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/proxy-client.key
	I1128 00:43:56.435190   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:43:56.435231   45815 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:43:56.435249   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:43:56.435280   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:43:56.435317   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:43:56.435348   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:43:56.435402   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:56.436170   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:43:56.464712   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 00:43:56.492394   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:43:56.517331   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1128 00:43:56.540656   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:43:56.562997   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:43:56.587574   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:43:56.614358   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:43:56.640027   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:43:56.666632   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:43:56.690925   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:43:56.716816   45815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:43:56.734079   45815 ssh_runner.go:195] Run: openssl version
	I1128 00:43:56.739942   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:43:56.751230   45815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:56.757607   45815 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:56.757662   45815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:56.764184   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:43:56.777196   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:43:56.788408   45815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:43:56.793610   45815 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:43:56.793667   45815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:43:56.799203   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:43:56.809821   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:43:56.820489   45815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:43:56.825268   45815 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:43:56.825335   45815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:43:56.830869   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
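The hash-and-symlink steps above follow OpenSSL's trust-store convention: certificates under /etc/ssl/certs are looked up by subject-name hash, so each CA file gets a <hash>.0 symlink. A minimal sketch of what the logged commands do for the minikube CA:

    # compute the subject hash OpenSSL uses for lookups, then create the <hash>.0 link
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"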
	I1128 00:43:56.843707   45815 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:43:56.848717   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:43:56.855268   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:43:56.861889   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:43:56.867773   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:43:56.874642   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:43:56.882143   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
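The openssl runs above are short-term expiry checks: -checkend 86400 makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours). A sketch of the same check over a few of the certificates named in the log:

    # exit status 0 means the certificate is still valid for at least another 24 hours
    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
        && echo "${crt}: ok" || echo "${crt}: expires within 24h"
    done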
	I1128 00:43:56.889812   45815 kubeadm.go:404] StartCluster: {Name:no-preload-473615 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.0 ClusterName:no-preload-473615 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.195 Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:43:56.889969   45815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:43:56.890021   45815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:43:56.931994   45815 cri.go:89] found id: ""
	I1128 00:43:56.932061   45815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:43:56.941996   45815 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:43:56.942014   45815 kubeadm.go:636] restartCluster start
	I1128 00:43:56.942074   45815 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:43:56.950854   45815 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:56.951919   45815 kubeconfig.go:92] found "no-preload-473615" server: "https://192.168.61.195:8443"
	I1128 00:43:56.954777   45815 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:43:56.963839   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:56.963902   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:56.974803   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:56.974821   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:56.974869   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:56.989023   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:57.489949   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:57.490022   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:57.501695   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:57.989930   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:57.990014   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:58.002435   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:58.489856   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:58.489946   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:58.506641   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
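The run of "Checking apiserver status" messages above is a poll: each attempt looks for a kube-apiserver process by command line and, on failure, waits and retries (the entries are spaced roughly half a second apart). A rough shell equivalent of that loop, with the interval and retry count inferred from the timestamps rather than taken from the minikube source:

    # retry until a kube-apiserver process started for this profile shows up
    for attempt in $(seq 1 120); do
      if pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); then
        echo "apiserver pid: ${pid}"
        break
      fi
      sleep 0.5   # roughly matches the spacing of the log entries above
    done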
	I1128 00:43:58.131523   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetIP
	I1128 00:43:58.134378   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:58.134826   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:58.134859   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:58.135087   46126 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1128 00:43:58.139363   46126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:58.151488   46126 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:43:58.151552   46126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:43:58.193551   46126 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 00:43:58.193618   46126 ssh_runner.go:195] Run: which lz4
	I1128 00:43:58.197624   46126 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1128 00:43:58.201842   46126 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 00:43:58.201875   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 00:44:00.068140   46126 crio.go:444] Took 1.870561 seconds to copy over tarball
	I1128 00:44:00.068221   46126 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 00:43:58.122924   45269 main.go:141] libmachine: (old-k8s-version-732472) Waiting to get IP...
	I1128 00:43:58.123826   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:58.124165   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:58.124263   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:58.124146   46882 retry.go:31] will retry after 249.216665ms: waiting for machine to come up
	I1128 00:43:58.374969   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:58.375510   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:58.375537   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:58.375457   46882 retry.go:31] will retry after 317.223146ms: waiting for machine to come up
	I1128 00:43:58.694027   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:58.694483   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:58.694535   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:58.694443   46882 retry.go:31] will retry after 362.880377ms: waiting for machine to come up
	I1128 00:43:59.058976   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:59.059623   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:59.059650   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:59.059571   46882 retry.go:31] will retry after 545.497342ms: waiting for machine to come up
	I1128 00:43:59.606962   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:59.607607   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:59.607633   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:59.607558   46882 retry.go:31] will retry after 678.467206ms: waiting for machine to come up
	I1128 00:44:00.287531   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:00.288062   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:00.288103   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:00.288054   46882 retry.go:31] will retry after 817.7633ms: waiting for machine to come up
	I1128 00:44:01.107179   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:01.107748   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:01.107776   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:01.107690   46882 retry.go:31] will retry after 1.02533736s: waiting for machine to come up
	I1128 00:44:02.134384   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:02.134940   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:02.134972   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:02.134867   46882 retry.go:31] will retry after 1.291264059s: waiting for machine to come up
	I1128 00:43:58.491595   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:00.983179   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:43:58.989453   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:58.989568   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:59.006339   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:59.489912   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:59.490007   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:59.505297   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:59.989924   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:59.990020   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:00.004118   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:00.489346   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:00.489421   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:00.504026   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:00.989739   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:00.989828   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:01.006279   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:01.489872   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:01.489975   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:01.504734   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:01.989185   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:01.989269   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:02.000313   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:02.489165   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:02.489246   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:02.505444   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:02.989956   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:02.990024   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:03.003038   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:03.489556   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:03.489663   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:03.502192   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:03.282407   46126 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.2141625s)
	I1128 00:44:03.282432   46126 crio.go:451] Took 3.214263 seconds to extract the tarball
	I1128 00:44:03.282440   46126 ssh_runner.go:146] rm: /preloaded.tar.lz4
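The preload handling above reduces to four steps: check whether the expected images are already in the CRI-O store, copy the preloaded tarball onto the node, unpack it into /var with lz4, and delete the tarball. Condensed into a sketch (the scp destination "node" is a placeholder for the guest VM; minikube does the copy over its own SSH runner):

    sudo crictl images --output json                      # are the v1.28.4 images already present?
    scp preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 node:/preloaded.tar.lz4
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4        # unpack the image store into /var
    sudo rm /preloaded.tar.lz4                            # free the ~450 MB the tarball occupies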
	I1128 00:44:03.324470   46126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:44:03.375858   46126 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 00:44:03.375881   46126 cache_images.go:84] Images are preloaded, skipping loading
	I1128 00:44:03.375944   46126 ssh_runner.go:195] Run: crio config
	I1128 00:44:03.440441   46126 cni.go:84] Creating CNI manager for ""
	I1128 00:44:03.440462   46126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:03.440479   46126 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:44:03.440496   46126 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.242 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-488423 NodeName:default-k8s-diff-port-488423 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.242"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.242 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:44:03.440670   46126 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.242
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-488423"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.242
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.242"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:44:03.440746   46126 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-488423 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.242
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-488423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1128 00:44:03.440830   46126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 00:44:03.450060   46126 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:44:03.450138   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:44:03.458748   46126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1128 00:44:03.475315   46126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:44:03.492886   46126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1128 00:44:03.509665   46126 ssh_runner.go:195] Run: grep 192.168.72.242	control-plane.minikube.internal$ /etc/hosts
	I1128 00:44:03.513441   46126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.242	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:44:03.527336   46126 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423 for IP: 192.168.72.242
	I1128 00:44:03.527373   46126 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:44:03.527539   46126 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:44:03.527592   46126 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:44:03.527690   46126 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/client.key
	I1128 00:44:03.527770   46126 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/apiserver.key.05574f60
	I1128 00:44:03.527827   46126 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/proxy-client.key
	I1128 00:44:03.527966   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:44:03.528009   46126 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:44:03.528024   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:44:03.528062   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:44:03.528098   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:44:03.528133   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:44:03.528188   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:44:03.528787   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:44:03.553210   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 00:44:03.578548   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:44:03.604661   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 00:44:03.627640   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:44:03.653147   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:44:03.681991   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:44:03.706068   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:44:03.730092   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:44:03.751326   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:44:03.776165   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:44:03.801844   46126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:44:03.819762   46126 ssh_runner.go:195] Run: openssl version
	I1128 00:44:03.826895   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:44:03.836806   46126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:44:03.842921   46126 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:44:03.842983   46126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:44:03.848802   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:44:03.859065   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:44:03.869720   46126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:03.874600   46126 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:03.874670   46126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:03.880712   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:44:03.891524   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:44:03.901286   46126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:44:03.906102   46126 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:44:03.906163   46126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:44:03.911563   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:44:03.921606   46126 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:44:03.926553   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:44:03.932640   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:44:03.938482   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:44:03.944483   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:44:03.950430   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:44:03.956197   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1128 00:44:03.962543   46126 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-488423 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-488423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.242 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:44:03.962647   46126 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:44:03.962700   46126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:04.014418   46126 cri.go:89] found id: ""
	I1128 00:44:04.014499   46126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:44:04.024132   46126 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:44:04.024178   46126 kubeadm.go:636] restartCluster start
	I1128 00:44:04.024239   46126 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:44:04.032856   46126 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.034010   46126 kubeconfig.go:92] found "default-k8s-diff-port-488423" server: "https://192.168.72.242:8444"
	I1128 00:44:04.036458   46126 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:44:04.044461   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.044513   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.054697   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.054714   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.054759   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.066995   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.567687   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.567784   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.579528   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:05.067882   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:05.067970   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:05.082579   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:05.568116   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:05.568240   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:05.579606   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:06.067125   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:06.067229   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:06.078637   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:06.567159   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:06.567258   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:06.578623   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:07.067770   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:07.067864   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:07.081883   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:03.427919   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:03.428413   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:03.428442   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:03.428350   46882 retry.go:31] will retry after 1.150784696s: waiting for machine to come up
	I1128 00:44:04.580519   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:04.580976   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:04.581008   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:04.580941   46882 retry.go:31] will retry after 1.981268381s: waiting for machine to come up
	I1128 00:44:06.564123   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:06.564623   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:06.564641   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:06.564596   46882 retry.go:31] will retry after 2.79895226s: waiting for machine to come up
	I1128 00:44:02.984445   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:05.483562   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:03.989899   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:03.995828   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.009197   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.489749   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.489829   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.501445   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.989934   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.990019   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:05.004077   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:05.489549   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:05.489634   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:05.501227   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:05.989858   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:05.989940   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:06.003151   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:06.489699   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:06.489785   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:06.502937   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:06.964667   45815 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:44:06.964705   45815 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:44:06.964720   45815 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:44:06.964808   45815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:07.008487   45815 cri.go:89] found id: ""
	I1128 00:44:07.008572   45815 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:44:07.028576   45815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:44:07.040057   45815 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:44:07.040130   45815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:07.050063   45815 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:07.050085   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:07.199305   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:08.265283   45815 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.065924411s)
	I1128 00:44:08.265324   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:08.468254   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:08.570027   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:08.650823   45815 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:44:08.650900   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:08.667640   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
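The restart path above rebuilds the control plane piecemeal instead of running a full kubeadm init: certificates, kubeconfigs, kubelet start, static control-plane manifests, then local etcd, all against the freshly rendered config. Restated as the commands the log executes (the PATH prefix points at the per-version binaries minikube ships; KUBEADM_CFG is just a shorthand introduced here):

    KUBEADM_CFG=/var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase certs all         --config "$KUBEADM_CFG"
    sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase kubeconfig all    --config "$KUBEADM_CFG"
    sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase kubelet-start     --config "$KUBEADM_CFG"
    sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase control-plane all --config "$KUBEADM_CFG"
    sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase etcd local        --config "$KUBEADM_CFG"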
	I1128 00:44:07.567667   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:07.567751   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:07.580778   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:08.067282   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:08.067368   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:08.080618   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:08.567146   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:08.567232   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:08.580324   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:09.067606   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:09.067728   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:09.083426   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:09.567987   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:09.568084   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:09.579657   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:10.067205   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:10.067292   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:10.082466   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:10.568064   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:10.568159   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:10.583356   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:11.067987   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:11.068114   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:11.084486   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:11.567945   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:11.568057   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:11.583108   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:12.068099   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:12.068186   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:12.079172   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:09.366118   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:09.366642   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:09.366677   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:09.366580   46882 retry.go:31] will retry after 2.538437833s: waiting for machine to come up
	I1128 00:44:11.906292   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:11.906799   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:11.906823   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:11.906751   46882 retry.go:31] will retry after 4.351501946s: waiting for machine to come up
	I1128 00:44:07.983966   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:09.985333   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:12.483805   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:09.182449   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:09.681686   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:10.181905   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:10.681692   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:11.181652   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:11.209900   45815 api_server.go:72] duration metric: took 2.559073582s to wait for apiserver process to appear ...
	I1128 00:44:11.209935   45815 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:44:11.209954   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:15.242230   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:15.242261   45815 api_server.go:103] status: https://192.168.61.195:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:15.242276   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:15.285509   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:15.285538   45815 api_server.go:103] status: https://192.168.61.195:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:15.786232   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:15.791529   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:15.791565   45815 api_server.go:103] status: https://192.168.61.195:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:16.285909   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:16.290996   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:16.291040   45815 api_server.go:103] status: https://192.168.61.195:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:16.786199   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:16.792488   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 200:
	ok
	I1128 00:44:16.805778   45815 api_server.go:141] control plane version: v1.29.0-rc.0
	I1128 00:44:16.805807   45815 api_server.go:131] duration metric: took 5.595863517s to wait for apiserver health ...
	I1128 00:44:16.805817   45815 cni.go:84] Creating CNI manager for ""
	I1128 00:44:16.805825   45815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:16.807924   45815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:44:12.567969   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:12.568085   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:12.579496   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:13.068092   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:13.068164   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:13.079081   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:13.567677   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:13.567773   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:13.579000   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:14.044782   46126 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:44:14.044818   46126 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:44:14.044832   46126 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:44:14.044927   46126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:14.090411   46126 cri.go:89] found id: ""
	I1128 00:44:14.090487   46126 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:44:14.106216   46126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:44:14.116309   46126 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:44:14.116367   46126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:14.125060   46126 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:14.125082   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:14.259194   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:14.923712   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:15.113501   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:15.221455   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:15.317171   46126 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:44:15.317269   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:15.332625   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:15.847268   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:16.347347   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:16.847441   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:16.259741   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.260326   45269 main.go:141] libmachine: (old-k8s-version-732472) Found IP for machine: 192.168.39.172
	I1128 00:44:16.260347   45269 main.go:141] libmachine: (old-k8s-version-732472) Reserving static IP address...
	I1128 00:44:16.260368   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has current primary IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.260940   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "old-k8s-version-732472", mac: "52:54:00:ff:2b:fd", ip: "192.168.39.172"} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.260978   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | skip adding static IP to network mk-old-k8s-version-732472 - found existing host DHCP lease matching {name: "old-k8s-version-732472", mac: "52:54:00:ff:2b:fd", ip: "192.168.39.172"}
	I1128 00:44:16.261003   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Getting to WaitForSSH function...
	I1128 00:44:16.261021   45269 main.go:141] libmachine: (old-k8s-version-732472) Reserved static IP address: 192.168.39.172
	I1128 00:44:16.261037   45269 main.go:141] libmachine: (old-k8s-version-732472) Waiting for SSH to be available...
	I1128 00:44:16.264000   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.264370   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.264402   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.264496   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Using SSH client type: external
	I1128 00:44:16.264560   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa (-rw-------)
	I1128 00:44:16.264600   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:44:16.264624   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | About to run SSH command:
	I1128 00:44:16.264641   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | exit 0
	I1128 00:44:16.373651   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | SSH cmd err, output: <nil>: 
	I1128 00:44:16.374185   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetConfigRaw
	I1128 00:44:16.374992   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetIP
	I1128 00:44:16.378530   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.378958   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.378987   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.379390   45269 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/config.json ...
	I1128 00:44:16.379622   45269 machine.go:88] provisioning docker machine ...
	I1128 00:44:16.379646   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:16.379854   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetMachineName
	I1128 00:44:16.380005   45269 buildroot.go:166] provisioning hostname "old-k8s-version-732472"
	I1128 00:44:16.380024   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetMachineName
	I1128 00:44:16.380152   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:16.382908   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.383346   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.383376   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.383604   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:16.383824   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:16.384024   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:16.384179   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:16.384365   45269 main.go:141] libmachine: Using SSH client type: native
	I1128 00:44:16.384875   45269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1128 00:44:16.384902   45269 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-732472 && echo "old-k8s-version-732472" | sudo tee /etc/hostname
	I1128 00:44:16.547302   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-732472
	
	I1128 00:44:16.547378   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:16.550883   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.551409   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.551448   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.551634   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:16.551888   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:16.552113   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:16.552258   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:16.552465   45269 main.go:141] libmachine: Using SSH client type: native
	I1128 00:44:16.552965   45269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1128 00:44:16.552994   45269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-732472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-732472/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-732472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:44:16.705539   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:44:16.705577   45269 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:44:16.705601   45269 buildroot.go:174] setting up certificates
	I1128 00:44:16.705611   45269 provision.go:83] configureAuth start
	I1128 00:44:16.705622   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetMachineName
	I1128 00:44:16.705962   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetIP
	I1128 00:44:16.708726   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.709231   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.709283   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.709531   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:16.712023   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.712491   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.712524   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.712658   45269 provision.go:138] copyHostCerts
	I1128 00:44:16.712720   45269 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:44:16.712734   45269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:44:16.712835   45269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:44:16.712990   45269 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:44:16.713005   45269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:44:16.713041   45269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:44:16.713154   45269 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:44:16.713168   45269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:44:16.713201   45269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:44:16.713291   45269 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-732472 san=[192.168.39.172 192.168.39.172 localhost 127.0.0.1 minikube old-k8s-version-732472]
	I1128 00:44:17.255079   45269 provision.go:172] copyRemoteCerts
	I1128 00:44:17.255157   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:44:17.255184   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:17.258078   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.258486   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:17.258522   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.258704   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:17.258892   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.259071   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:17.259278   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:44:17.360891   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1128 00:44:14.981992   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:16.984334   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:16.809569   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:44:16.837545   45815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:44:16.884377   45815 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:44:16.901252   45815 system_pods.go:59] 9 kube-system pods found
	I1128 00:44:16.901296   45815 system_pods.go:61] "coredns-76f75df574-54p94" [fc2580d3-8c03-46c8-aa43-fce9472a4bbc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:44:16.901310   45815 system_pods.go:61] "coredns-76f75df574-9ptz7" [c51a1796-37bb-411b-8477-fb4d8c7e7cb2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:44:16.901322   45815 system_pods.go:61] "etcd-no-preload-473615" [c789418f-23b1-4e84-95df-e339afc358e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 00:44:16.901337   45815 system_pods.go:61] "kube-apiserver-no-preload-473615" [204c5f02-7e14-4761-9af0-606f227dee63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 00:44:16.901351   45815 system_pods.go:61] "kube-controller-manager-no-preload-473615" [2d96a78f-b0c9-4731-a8a1-ec63459a09ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 00:44:16.901368   45815 system_pods.go:61] "kube-proxy-trr4j" [df593d3d-db4c-45f9-ad79-f35fe2cdef84] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 00:44:16.901379   45815 system_pods.go:61] "kube-scheduler-no-preload-473615" [5fe2c87b-af8b-4184-8b62-399e488dcb5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 00:44:16.901393   45815 system_pods.go:61] "metrics-server-57f55c9bc5-lh4m8" [4c3ae55b-befb-44d2-8982-592acdf3eab9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:44:16.901408   45815 system_pods.go:61] "storage-provisioner" [a3e71dd4-570e-4895-aac4-d98dfbd69a6a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 00:44:16.901423   45815 system_pods.go:74] duration metric: took 17.023663ms to wait for pod list to return data ...
	I1128 00:44:16.901434   45815 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:44:16.905738   45815 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:44:16.905766   45815 node_conditions.go:123] node cpu capacity is 2
	I1128 00:44:16.905776   45815 node_conditions.go:105] duration metric: took 4.335236ms to run NodePressure ...
	I1128 00:44:16.905791   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:17.532813   45815 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:44:17.548788   45815 kubeadm.go:787] kubelet initialised
	I1128 00:44:17.548814   45815 kubeadm.go:788] duration metric: took 15.969396ms waiting for restarted kubelet to initialise ...
	I1128 00:44:17.548824   45815 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:44:17.569590   45815 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-54p94" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:17.388160   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 00:44:17.415589   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:44:17.443880   45269 provision.go:86] duration metric: configureAuth took 738.257631ms
	I1128 00:44:17.443913   45269 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:44:17.444142   45269 config.go:182] Loaded profile config "old-k8s-version-732472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 00:44:17.444240   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:17.447355   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.447699   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:17.447726   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.447980   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:17.448213   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.448382   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.448542   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:17.448730   45269 main.go:141] libmachine: Using SSH client type: native
	I1128 00:44:17.449148   45269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1128 00:44:17.449173   45269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:44:17.825162   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:44:17.825202   45269 machine.go:91] provisioned docker machine in 1.445550198s
	I1128 00:44:17.825215   45269 start.go:300] post-start starting for "old-k8s-version-732472" (driver="kvm2")
	I1128 00:44:17.825229   45269 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:44:17.825255   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:17.825631   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:44:17.825665   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:17.829047   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.829650   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:17.829813   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.829885   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:17.830108   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.830270   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:17.830427   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:44:17.933926   45269 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:44:17.939164   45269 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:44:17.939192   45269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:44:17.939273   45269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:44:17.939364   45269 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:44:17.939481   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:44:17.950901   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:44:17.983827   45269 start.go:303] post-start completed in 158.593642ms
	I1128 00:44:17.983856   45269 fix.go:56] fixHost completed within 21.237897087s
	I1128 00:44:17.983880   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:17.988473   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.988983   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:17.989011   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.989353   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:17.989611   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.989755   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.989981   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:17.990202   45269 main.go:141] libmachine: Using SSH client type: native
	I1128 00:44:17.990729   45269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1128 00:44:17.990748   45269 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:44:18.139114   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701132258.087547922
	
	I1128 00:44:18.139142   45269 fix.go:206] guest clock: 1701132258.087547922
	I1128 00:44:18.139154   45269 fix.go:219] Guest: 2023-11-28 00:44:18.087547922 +0000 UTC Remote: 2023-11-28 00:44:17.983860571 +0000 UTC m=+360.654750753 (delta=103.687351ms)
	I1128 00:44:18.139206   45269 fix.go:190] guest clock delta is within tolerance: 103.687351ms
	I1128 00:44:18.139217   45269 start.go:83] releasing machines lock for "old-k8s-version-732472", held for 21.393285553s
	I1128 00:44:18.139256   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:18.139552   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetIP
	I1128 00:44:18.142899   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.143376   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:18.143407   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.143562   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:18.144123   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:18.144308   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:18.144414   45269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:44:18.144473   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:18.144586   45269 ssh_runner.go:195] Run: cat /version.json
	I1128 00:44:18.144614   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:18.147761   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.147994   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.148459   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:18.148542   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.148581   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:18.148605   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.148878   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:18.148892   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:18.149080   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:18.149094   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:18.149266   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:18.149288   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:18.149473   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:44:18.149488   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:44:18.271569   45269 ssh_runner.go:195] Run: systemctl --version
	I1128 00:44:18.277814   45269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:44:18.432301   45269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:44:18.438677   45269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:44:18.438749   45269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:44:18.455128   45269 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:44:18.455155   45269 start.go:472] detecting cgroup driver to use...
	I1128 00:44:18.455229   45269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:44:18.472928   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:44:18.490329   45269 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:44:18.490409   45269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:44:18.505705   45269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:44:18.523509   45269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:44:18.696691   45269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:44:18.829641   45269 docker.go:219] disabling docker service ...
	I1128 00:44:18.829775   45269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:44:18.847903   45269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:44:18.863690   45269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:44:19.002181   45269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:44:19.130955   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:44:19.146034   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:44:19.165714   45269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1128 00:44:19.165790   45269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:44:19.176303   45269 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:44:19.176368   45269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:44:19.186698   45269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:44:19.196137   45269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:44:19.205054   45269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:44:19.215067   45269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:44:19.224332   45269 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:44:19.224376   45269 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:44:19.238079   45269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:44:19.246692   45269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:44:19.360913   45269 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 00:44:19.548488   45269 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:44:19.548563   45269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:44:19.553293   45269 start.go:540] Will wait 60s for crictl version
	I1128 00:44:19.553362   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:19.557103   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:44:19.605572   45269 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:44:19.605662   45269 ssh_runner.go:195] Run: crio --version
	I1128 00:44:19.655808   45269 ssh_runner.go:195] Run: crio --version
	I1128 00:44:19.709415   45269 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1128 00:44:17.346814   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:17.847354   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:17.878161   46126 api_server.go:72] duration metric: took 2.560990106s to wait for apiserver process to appear ...
	I1128 00:44:17.878189   46126 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:44:17.878218   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:17.878696   46126 api_server.go:269] stopped: https://192.168.72.242:8444/healthz: Get "https://192.168.72.242:8444/healthz": dial tcp 192.168.72.242:8444: connect: connection refused
	I1128 00:44:17.878732   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:17.879110   46126 api_server.go:269] stopped: https://192.168.72.242:8444/healthz: Get "https://192.168.72.242:8444/healthz": dial tcp 192.168.72.242:8444: connect: connection refused
	I1128 00:44:18.379800   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:19.710653   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetIP
	I1128 00:44:19.713912   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:19.714358   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:19.714402   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:19.714586   45269 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1128 00:44:19.719516   45269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:44:19.736367   45269 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1128 00:44:19.736422   45269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:44:19.788917   45269 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1128 00:44:19.789021   45269 ssh_runner.go:195] Run: which lz4
	I1128 00:44:19.793502   45269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 00:44:19.797933   45269 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 00:44:19.797967   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1128 00:44:21.595649   45269 crio.go:444] Took 1.802185 seconds to copy over tarball
	I1128 00:44:21.595754   45269 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
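
Because the CRI-O image store held none of the v1.16.0 images, the 45269 run falls back to the preload path: it verifies lz4 is available, finds no /preloaded.tar.lz4 on the node, copies the ~441 MB preload tarball over SSH, and unpacks it directly into /var so CRI-O picks the images up. The equivalent manual steps, with paths as shown in the log:

    sudo crictl images --output json    # confirm the expected images really are absent
    which lz4                           # extraction below needs the lz4 binary
    # after copying the preload tarball to /preloaded.tar.lz4:
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
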
	I1128 00:44:19.483696   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:21.485632   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:19.612824   45815 pod_ready.go:102] pod "coredns-76f75df574-54p94" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:22.111469   45815 pod_ready.go:92] pod "coredns-76f75df574-54p94" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:22.111506   45815 pod_ready.go:81] duration metric: took 4.541884744s waiting for pod "coredns-76f75df574-54p94" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:22.111522   45815 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-9ptz7" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:22.118896   45815 pod_ready.go:92] pod "coredns-76f75df574-9ptz7" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:22.118916   45815 pod_ready.go:81] duration metric: took 7.386009ms waiting for pod "coredns-76f75df574-9ptz7" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:22.118925   45815 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-473615" in "kube-system" namespace to be "Ready" ...
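
The 45815 run (the no-preload-473615 profile) is meanwhile in its pod_ready loop: each system pod is polled for the Ready condition with a 4-minute budget, and the per-pod wait duration is logged once the condition flips to True. The same check run by hand with kubectl — pod and profile names are taken from the log; the kubeconfig context is assumed to match the profile name:

    kubectl --context no-preload-473615 -n kube-system wait \
      --for=condition=Ready pod/etcd-no-preload-473615 --timeout=4m0s
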
	I1128 00:44:22.651574   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:22.651606   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:22.651632   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:22.731086   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:22.731124   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:22.879396   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:22.889686   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:22.889721   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:23.380219   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:23.387416   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:23.387458   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:23.880170   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:23.886215   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:23.886286   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:24.380095   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:24.387531   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 200:
	ok
	I1128 00:44:24.411131   46126 api_server.go:141] control plane version: v1.28.4
	I1128 00:44:24.411169   46126 api_server.go:131] duration metric: took 6.532961174s to wait for apiserver health ...
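
The healthz exchange above is the usual recovery sequence for a restarted apiserver: the anonymous probe is refused outright while the listener is down (connection refused), gets 403 while anonymous access to /healthz is not yet authorized, gets 500 while individual post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still failing, and finally gets 200 with the body "ok". A sketch of the same poll with curl, using -k because the endpoint serves minikube's self-signed certificate (the log polls roughly every 500 ms; a one-second sleep is used here for portability):

    until curl -ks https://192.168.72.242:8444/healthz | grep -qx ok; do
      sleep 1
    done
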
	I1128 00:44:24.411180   46126 cni.go:84] Creating CNI manager for ""
	I1128 00:44:24.411186   46126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:24.701599   46126 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:44:24.853101   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:44:24.878687   46126 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
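
With the apiserver healthy, the 46126 run writes a 457-byte bridge configuration to /etc/cni/net.d/1-k8s.conflist. The file's contents are not shown in this log; purely as an illustration of the kind of conflist the CNI bridge plugin accepts (a generic example, not minikube's actual file — only the 10.244.0.0/16 subnet is taken from the pod CIDR used elsewhere in this log):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
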
	I1128 00:44:24.924669   46126 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:44:24.942030   46126 system_pods.go:59] 8 kube-system pods found
	I1128 00:44:24.942063   46126 system_pods.go:61] "coredns-5dd5756b68-n7qpb" [d027f799-6ced-488e-a4f7-6df351193c64] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:44:24.942074   46126 system_pods.go:61] "etcd-default-k8s-diff-port-488423" [55bf80da-df13-4429-962c-7fdb5ab44ea8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 00:44:24.942084   46126 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-488423" [88715645-e98e-42be-ad99-cc7711605abc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 00:44:24.942094   46126 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-488423" [07935350-12e0-4e86-8f88-7e03890aa417] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 00:44:24.942104   46126 system_pods.go:61] "kube-proxy-2sfbm" [8d92ac1f-4070-4000-9bc6-3d277e0c8c6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 00:44:24.942115   46126 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-488423" [42baed98-6b29-4f33-8bb3-df082a1b36ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 00:44:24.942134   46126 system_pods.go:61] "metrics-server-57f55c9bc5-fk9xx" [8b0d0cd6-41c5-4b67-98f9-f046e959e0e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:44:24.942152   46126 system_pods.go:61] "storage-provisioner" [f1e6e7d1-86aa-403c-b753-2b94beb7d7b1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 00:44:24.942163   46126 system_pods.go:74] duration metric: took 17.475554ms to wait for pod list to return data ...
	I1128 00:44:24.942224   46126 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:44:26.037379   46126 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:44:26.037423   46126 node_conditions.go:123] node cpu capacity is 2
	I1128 00:44:26.037450   46126 node_conditions.go:105] duration metric: took 1.095218932s to run NodePressure ...
	I1128 00:44:26.037473   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:27.084620   46126 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.047120714s)
	I1128 00:44:27.084659   46126 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:44:27.100248   46126 kubeadm.go:787] kubelet initialised
	I1128 00:44:27.100282   46126 kubeadm.go:788] duration metric: took 15.606572ms waiting for restarted kubelet to initialise ...
	I1128 00:44:27.100293   46126 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:44:27.108069   46126 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.117188   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.117221   46126 pod_ready.go:81] duration metric: took 9.127662ms waiting for pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.117238   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.117247   46126 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.123182   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.123213   46126 pod_ready.go:81] duration metric: took 5.9547ms waiting for pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.123226   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.123235   46126 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.130170   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.130196   46126 pod_ready.go:81] duration metric: took 6.952194ms waiting for pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.130209   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.130216   46126 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.136895   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.136925   46126 pod_ready.go:81] duration metric: took 6.699975ms waiting for pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.136940   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.136950   46126 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2sfbm" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:24.811723   45269 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.215918902s)
	I1128 00:44:24.811757   45269 crio.go:451] Took 3.216081 seconds to extract the tarball
	I1128 00:44:24.811769   45269 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 00:44:24.856120   45269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:44:24.918138   45269 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1128 00:44:24.918185   45269 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1128 00:44:24.918257   45269 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:24.918296   45269 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1128 00:44:24.918305   45269 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1128 00:44:24.918314   45269 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:24.918297   45269 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:24.918261   45269 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:44:24.918264   45269 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:24.918585   45269 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:24.919955   45269 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:24.919959   45269 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:24.919988   45269 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1128 00:44:24.919964   45269 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:44:24.920093   45269 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:24.920302   45269 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:24.920482   45269 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:24.920497   45269 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1128 00:44:25.041095   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:25.048823   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:25.071401   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1128 00:44:25.073489   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:25.081089   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1128 00:44:25.083887   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:25.100582   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:25.150855   45269 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1128 00:44:25.150909   45269 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:25.150960   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.151148   45269 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1128 00:44:25.151198   45269 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:25.151250   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.181984   45269 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1128 00:44:25.182039   45269 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1128 00:44:25.182089   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.260634   45269 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1128 00:44:25.260687   45269 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:25.260744   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.269386   45269 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1128 00:44:25.269436   45269 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1128 00:44:25.269460   45269 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1128 00:44:25.269480   45269 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:25.269508   45269 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1128 00:44:25.269517   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.269539   45269 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:25.269552   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.269573   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.269626   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:25.269642   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:25.269701   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1128 00:44:25.269733   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:25.368354   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1128 00:44:25.368405   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1128 00:44:25.368462   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1128 00:44:25.368474   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:25.368536   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:25.368537   45269 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1128 00:44:25.375204   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1128 00:44:25.375378   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1128 00:44:25.439797   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1128 00:44:25.465699   45269 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1128 00:44:25.465731   45269 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1128 00:44:25.465788   45269 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1128 00:44:25.465795   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1128 00:44:25.465810   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1128 00:44:25.797872   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:44:27.031275   45269 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.233351991s)
	I1128 00:44:27.031525   45269 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.565711109s)
	I1128 00:44:27.031549   45269 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1128 00:44:27.031594   45269 cache_images.go:92] LoadImages completed in 2.113388877s
	W1128 00:44:27.031667   45269 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
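
When the preload cannot cover an image set, minikube falls back to loading images one at a time: podman image inspect checks the store, crictl rmi removes mismatched copies, and podman load imports any cached tarball that exists on the build host. Here only pause_3.1 is present in the cache directory, which is why the run ends with the "Unable to load cached images" warning for kube-scheduler_v1.16.0. The per-image fallback by hand, with paths from the log:

    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.1 \
      || sudo podman load -i /var/lib/minikube/images/pause_3.1
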
	I1128 00:44:27.031754   45269 ssh_runner.go:195] Run: crio config
	I1128 00:44:27.100851   45269 cni.go:84] Creating CNI manager for ""
	I1128 00:44:27.100882   45269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:27.100901   45269 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:44:27.100924   45269 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-732472 NodeName:old-k8s-version-732472 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1128 00:44:27.101119   45269 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-732472"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-732472
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.172:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:44:27.101241   45269 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-732472 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-732472 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
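
The kubeadm and kubelet configuration dumped above is what gets written to the node a few lines later (/var/tmp/minikube/kubeadm.yaml.new, the kubelet systemd drop-in, and kubelet.service). Configs of this shape are then consumed by kubeadm phases; the 46126 run earlier in this log shows the invocation minikube uses for the addon phase, reproduced here with the v1.16.0 binary path this profile would use. This assumes the staged kubeadm.yaml.new has already been moved into place as /var/tmp/minikube/kubeadm.yaml, the path kubeadm is given in that run:

    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
      kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml
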
	I1128 00:44:27.101312   45269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1128 00:44:27.111964   45269 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:44:27.112049   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:44:27.122796   45269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1128 00:44:27.149768   45269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:44:27.168520   45269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1128 00:44:27.187296   45269 ssh_runner.go:195] Run: grep 192.168.39.172	control-plane.minikube.internal$ /etc/hosts
	I1128 00:44:27.191606   45269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:44:27.205482   45269 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472 for IP: 192.168.39.172
	I1128 00:44:27.205521   45269 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:44:27.205720   45269 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:44:27.205758   45269 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:44:27.205825   45269 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.key
	I1128 00:44:27.205885   45269 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/apiserver.key.ee96354a
	I1128 00:44:27.205931   45269 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/proxy-client.key
	I1128 00:44:27.206060   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:44:27.206115   45269 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:44:27.206130   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:44:27.206176   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:44:27.206214   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:44:27.206251   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:44:27.206313   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:44:27.207009   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:44:27.233932   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 00:44:27.258138   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:44:27.282203   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 00:44:27.309304   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:44:27.335945   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:44:27.360118   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:44:23.984808   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:26.118398   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:27.491683   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "kube-proxy-2sfbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.491724   46126 pod_ready.go:81] duration metric: took 354.756767ms waiting for pod "kube-proxy-2sfbm" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.491736   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "kube-proxy-2sfbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.491745   46126 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.890269   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.890299   46126 pod_ready.go:81] duration metric: took 398.544263ms waiting for pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.890316   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.890324   46126 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:28.289016   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:28.289043   46126 pod_ready.go:81] duration metric: took 398.709637ms waiting for pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:28.289055   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:28.289062   46126 pod_ready.go:38] duration metric: took 1.188759196s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:44:28.289084   46126 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:44:28.301648   46126 ops.go:34] apiserver oom_adj: -16
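
The oom_adj probe above is a quick sanity check that the restarted apiserver is running as the kubelet-managed static pod rather than a stray process: the kubelet gives control-plane containers a strongly negative OOM priority, and the -16 logged here is consistent with that. Run by hand on the node (pgrep is assumed to match exactly one process):

    cat /proc/$(pgrep kube-apiserver)/oom_adj    # -16 in this run
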
	I1128 00:44:28.301676   46126 kubeadm.go:640] restartCluster took 24.277487612s
	I1128 00:44:28.301683   46126 kubeadm.go:406] StartCluster complete in 24.339149368s
	I1128 00:44:28.301697   46126 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:44:28.301770   46126 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:44:28.303560   46126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:44:28.303802   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:44:28.303915   46126 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:44:28.303994   46126 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-488423"
	I1128 00:44:28.304023   46126 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-488423"
	W1128 00:44:28.304038   46126 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:44:28.304040   46126 config.go:182] Loaded profile config "default-k8s-diff-port-488423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:44:28.304063   46126 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-488423"
	I1128 00:44:28.304117   46126 host.go:66] Checking if "default-k8s-diff-port-488423" exists ...
	I1128 00:44:28.304118   46126 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-488423"
	W1128 00:44:28.304134   46126 addons.go:240] addon metrics-server should already be in state true
	I1128 00:44:28.304220   46126 host.go:66] Checking if "default-k8s-diff-port-488423" exists ...
	I1128 00:44:28.304547   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.304589   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.304669   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.304741   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.304928   46126 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-488423"
	I1128 00:44:28.304956   46126 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-488423"
	I1128 00:44:28.305388   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.305437   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.310450   46126 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-488423" context rescaled to 1 replicas
	I1128 00:44:28.310496   46126 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.242 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:44:28.312602   46126 out.go:177] * Verifying Kubernetes components...
	I1128 00:44:28.314027   46126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:44:28.321407   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40955
	I1128 00:44:28.321423   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41137
	I1128 00:44:28.322247   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.322287   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.322797   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.322820   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.322942   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.322968   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.323210   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.323242   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35475
	I1128 00:44:28.323323   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.323556   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.323775   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.323818   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.323857   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.323891   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.323937   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.323957   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.324293   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.324471   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:44:28.327954   46126 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-488423"
	W1128 00:44:28.327972   46126 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:44:28.327993   46126 host.go:66] Checking if "default-k8s-diff-port-488423" exists ...
	I1128 00:44:28.328327   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.328355   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.342376   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40729
	I1128 00:44:28.342789   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.343325   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.343366   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.343751   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.343978   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38927
	I1128 00:44:28.343995   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:44:28.344392   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.344983   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.345009   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.345366   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.345910   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:44:28.348242   46126 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:44:28.346449   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39125
	I1128 00:44:28.350126   46126 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:44:28.350147   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:44:28.350166   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:44:28.346666   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.350250   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.348589   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.350911   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.350930   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.351442   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.351817   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:44:28.353691   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:44:28.353876   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.355460   46126 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 00:44:24.141365   45815 pod_ready.go:102] pod "etcd-no-preload-473615" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:26.518655   45815 pod_ready.go:102] pod "etcd-no-preload-473615" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:26.887843   45815 pod_ready.go:92] pod "etcd-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:26.887877   45815 pod_ready.go:81] duration metric: took 4.768943982s waiting for pod "etcd-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:26.887891   45815 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:26.909504   45815 pod_ready.go:92] pod "kube-apiserver-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:26.909600   45815 pod_ready.go:81] duration metric: took 21.699474ms waiting for pod "kube-apiserver-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:26.909627   45815 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:28.354335   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:44:28.354504   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:44:28.357068   46126 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 00:44:28.357088   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 00:44:28.357094   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.357109   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:44:28.357228   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:44:28.357356   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:44:28.357475   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:44:28.360015   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.360725   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:44:28.360785   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.360994   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:44:28.361177   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:44:28.361341   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:44:28.361503   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:44:28.368150   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I1128 00:44:28.368511   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.369005   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.369023   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.369326   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.369481   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:44:28.370807   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:44:28.371066   46126 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:44:28.371078   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:44:28.371092   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:44:28.373819   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.374409   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:44:28.374510   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:44:28.374541   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.374602   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:44:28.374688   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:44:28.374768   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
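(Note: the sshutil.go:53 entries above record minikube opening SSH sessions into the guest VM at 192.168.72.242, port 22, user docker, using the machine key under .minikube/machines/default-k8s-diff-port-488423/. The following Go sketch is only a rough illustration of such a session using golang.org/x/crypto/ssh, not minikube's actual ssh_runner code; the command it runs is a placeholder.)

package main

import (
	"fmt"
	"net"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Values taken from the log lines above; the key is the per-machine key minikube generated.
	ip := "192.168.72.242"
	keyPath := "/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa"

	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", net.JoinHostPort(ip, "22"), cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Placeholder command; minikube copies addon manifests and then runs kubectl over sessions like this.
	out, err := session.CombinedOutput("ls /etc/kubernetes/addons")
	fmt.Println(string(out), err)
}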
	I1128 00:44:28.474380   46126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:44:28.505183   46126 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 00:44:28.505206   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 00:44:28.536550   46126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:44:28.584832   46126 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 00:44:28.584857   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 00:44:28.626477   46126 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1128 00:44:28.626473   46126 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-488423" to be "Ready" ...
	I1128 00:44:28.644406   46126 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:44:28.644436   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 00:44:28.671872   46126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:44:29.867337   46126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.330746736s)
	I1128 00:44:29.867437   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.867451   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.867490   46126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.393076585s)
	I1128 00:44:29.867532   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.867553   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.867827   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.867841   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.867850   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.867858   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.867988   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.868006   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.868029   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.868038   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.868129   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Closing plugin on server side
	I1128 00:44:29.868145   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.868159   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.868381   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.868400   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.868429   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Closing plugin on server side
	I1128 00:44:29.876482   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.876505   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.876724   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.876736   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.885484   46126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.213575767s)
	I1128 00:44:29.885534   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.885551   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.885841   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.885862   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.885873   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.885883   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.885887   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Closing plugin on server side
	I1128 00:44:29.886153   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Closing plugin on server side
	I1128 00:44:29.886164   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.886194   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.886211   46126 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-488423"
	I1128 00:44:29.889173   46126 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1128 00:44:29.890607   46126 addons.go:502] enable addons completed in 1.586699021s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1128 00:44:30.716680   46126 node_ready.go:58] node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.385529   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:44:27.411354   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:44:27.439142   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:44:27.466763   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:44:27.497738   45269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:44:27.518132   45269 ssh_runner.go:195] Run: openssl version
	I1128 00:44:27.524720   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:44:27.537673   45269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:44:27.542561   45269 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:44:27.542623   45269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:44:27.548137   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:44:27.558112   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:44:27.568318   45269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:44:27.573638   45269 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:44:27.573697   45269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:44:27.579739   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:44:27.589908   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:44:27.599937   45269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:27.606264   45269 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:27.606340   45269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:27.612850   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:44:27.623388   45269 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:44:27.628140   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:44:27.634670   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:44:27.642071   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:44:27.650207   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:44:27.656836   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:44:27.662837   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
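(Note: the openssl invocations above use -checkend 86400 to verify that each control-plane certificate stays valid for at least another 24 hours. Below is a minimal Go equivalent of that single check, assuming a PEM-encoded certificate file; the path used is just one of the files listed above.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window, roughly what "openssl x509 -noout -checkend <seconds>" tests.
func certExpiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}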
	I1128 00:44:27.668909   45269 kubeadm.go:404] StartCluster: {Name:old-k8s-version-732472 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-732472 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:44:27.669005   45269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:44:27.669075   45269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:27.711918   45269 cri.go:89] found id: ""
	I1128 00:44:27.711993   45269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:44:27.722058   45269 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:44:27.722084   45269 kubeadm.go:636] restartCluster start
	I1128 00:44:27.722146   45269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:44:27.731619   45269 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:27.733224   45269 kubeconfig.go:92] found "old-k8s-version-732472" server: "https://192.168.39.172:8443"
	I1128 00:44:27.736867   45269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:44:27.747794   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:27.747862   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:27.762055   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:27.762079   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:27.762146   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:27.773241   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:28.273910   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:28.274001   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:28.286159   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:28.773393   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:28.773492   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:28.785781   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:29.274130   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:29.274199   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:29.289388   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:29.773916   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:29.774022   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:29.789483   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:30.273920   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:30.274026   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:30.285579   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:30.773910   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:30.774005   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:30.785536   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:31.273906   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:31.273977   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:31.285344   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:31.774284   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:31.774352   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:31.786435   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:32.273928   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:32.274008   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:32.289424   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:28.484735   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:30.983088   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:28.945293   45815 pod_ready.go:102] pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:30.445111   45815 pod_ready.go:92] pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:30.445133   45815 pod_ready.go:81] duration metric: took 3.535488087s waiting for pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.445143   45815 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-trr4j" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.450322   45815 pod_ready.go:92] pod "kube-proxy-trr4j" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:30.450342   45815 pod_ready.go:81] duration metric: took 5.193276ms waiting for pod "kube-proxy-trr4j" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.450350   45815 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.455002   45815 pod_ready.go:92] pod "kube-scheduler-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:30.455021   45815 pod_ready.go:81] duration metric: took 4.664949ms waiting for pod "kube-scheduler-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.455030   45815 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:32.915566   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:32.717086   46126 node_ready.go:58] node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:33.216905   46126 node_ready.go:49] node "default-k8s-diff-port-488423" has status "Ready":"True"
	I1128 00:44:33.216930   46126 node_ready.go:38] duration metric: took 4.590426391s waiting for node "default-k8s-diff-port-488423" to be "Ready" ...
	I1128 00:44:33.216938   46126 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:44:33.223257   46126 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:33.744567   46126 pod_ready.go:92] pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:33.744592   46126 pod_ready.go:81] duration metric: took 521.313062ms waiting for pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:33.744601   46126 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:35.763867   46126 pod_ready.go:102] pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"False"
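(Note: the pod_ready.go lines interleaved above come from several concurrent test runs, each polling pods in kube-system until their PodReady condition turns True. Below is a minimal client-go sketch of that condition check, not minikube's actual implementation; the kubeconfig path, pod name, and poll interval are assumptions.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the "Ready" check reported by the pod_ready.go lines:
// a pod counts as Ready when its PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll with an arbitrary interval until the pod reports Ready.
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-default-k8s-diff-port-488423", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}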
	I1128 00:44:32.773549   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:32.773643   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:32.785461   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:33.273911   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:33.273994   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:33.285646   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:33.773944   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:33.774046   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:33.786576   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:34.273902   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:34.273969   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:34.285791   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:34.773895   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:34.773965   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:34.785934   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:35.273675   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:35.273738   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:35.285549   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:35.773954   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:35.774041   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:35.786010   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:36.273591   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:36.273659   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:36.284794   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:36.773864   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:36.773931   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:36.786610   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:37.273899   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:37.274025   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:37.285678   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:32.983159   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:34.985149   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:37.482210   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:35.413821   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:37.417790   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:37.768358   46126 pod_ready.go:92] pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:37.768398   46126 pod_ready.go:81] duration metric: took 4.023788643s waiting for pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.768411   46126 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.775805   46126 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:37.775835   46126 pod_ready.go:81] duration metric: took 7.41435ms waiting for pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.775847   46126 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.788110   46126 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:37.788139   46126 pod_ready.go:81] duration metric: took 12.28235ms waiting for pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.788151   46126 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2sfbm" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:38.018402   46126 pod_ready.go:92] pod "kube-proxy-2sfbm" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:38.018426   46126 pod_ready.go:81] duration metric: took 230.267334ms waiting for pod "kube-proxy-2sfbm" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:38.018443   46126 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:38.818531   46126 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:38.818559   46126 pod_ready.go:81] duration metric: took 800.108369ms waiting for pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:38.818572   46126 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:41.127953   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:37.748214   45269 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:44:37.748260   45269 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:44:37.748276   45269 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:44:37.748334   45269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:37.796781   45269 cri.go:89] found id: ""
	I1128 00:44:37.796866   45269 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:44:37.814832   45269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:44:37.824395   45269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:44:37.824469   45269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:37.833592   45269 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:37.833618   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:37.955071   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:38.939529   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:39.160852   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:39.243789   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:39.372434   45269 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:44:39.372525   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:39.405594   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:39.927024   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:40.426600   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:40.927163   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:40.966905   45269 api_server.go:72] duration metric: took 1.594470962s to wait for apiserver process to appear ...
	I1128 00:44:40.966937   45269 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:44:40.966959   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:40.967412   45269 api_server.go:269] stopped: https://192.168.39.172:8443/healthz: Get "https://192.168.39.172:8443/healthz": dial tcp 192.168.39.172:8443: connect: connection refused
	I1128 00:44:40.967457   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:40.967851   45269 api_server.go:269] stopped: https://192.168.39.172:8443/healthz: Get "https://192.168.39.172:8443/healthz": dial tcp 192.168.39.172:8443: connect: connection refused
	I1128 00:44:41.468536   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:39.483204   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:41.483578   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:39.914738   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:42.415305   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:43.130157   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:45.628970   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:46.468813   45269 api_server.go:269] stopped: https://192.168.39.172:8443/healthz: Get "https://192.168.39.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1128 00:44:46.468859   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:43.984318   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:46.483855   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:44.914911   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:47.415274   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:47.435553   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:47.435586   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:47.435601   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:47.480977   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:47.481002   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:47.481012   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:47.506064   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:47.506098   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:47.968355   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:47.974731   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1128 00:44:47.974766   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1128 00:44:48.468954   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:48.484597   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1128 00:44:48.484627   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1128 00:44:48.968810   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:48.979310   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I1128 00:44:48.987751   45269 api_server.go:141] control plane version: v1.16.0
	I1128 00:44:48.987782   45269 api_server.go:131] duration metric: took 8.020836981s to wait for apiserver health ...
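(Note: the healthz probes above show the usual startup progression of a restarted apiserver: 403 while anonymous requests are still forbidden, 500 while poststarthooks such as rbac/bootstrap-roles are pending, then 200. A minimal sketch of such a polling loop follows, assuming an unauthenticated client with TLS verification disabled, which is also why a 403 for system:anonymous can appear.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // no client certs in this sketch
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.172:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}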
	I1128 00:44:48.987793   45269 cni.go:84] Creating CNI manager for ""
	I1128 00:44:48.987801   45269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:48.989720   45269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:44:48.129394   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:50.130239   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:48.991320   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:44:49.001231   45269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:44:49.019895   45269 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:44:49.027389   45269 system_pods.go:59] 7 kube-system pods found
	I1128 00:44:49.027417   45269 system_pods.go:61] "coredns-5644d7b6d9-9sh7z" [dcc226fb-5fd9-4757-bd93-1113f185cdce] Running
	I1128 00:44:49.027422   45269 system_pods.go:61] "etcd-old-k8s-version-732472" [a5899a5a-4812-41e1-9251-78fdaeea9597] Running
	I1128 00:44:49.027428   45269 system_pods.go:61] "kube-apiserver-old-k8s-version-732472" [13d2df8c-84a3-4bd4-8eab-ed9f732a3839] Running
	I1128 00:44:49.027435   45269 system_pods.go:61] "kube-controller-manager-old-k8s-version-732472" [6dc1e479-1a3a-4b9e-acd6-1183a25aece4] Running
	I1128 00:44:49.027441   45269 system_pods.go:61] "kube-proxy-jqrks" [e8fd665a-099e-4941-a8f2-917d2b864eeb] Running
	I1128 00:44:49.027447   45269 system_pods.go:61] "kube-scheduler-old-k8s-version-732472" [de147a31-927e-4051-b6ae-05ddf59290c8] Running
	I1128 00:44:49.027457   45269 system_pods.go:61] "storage-provisioner" [8d7e725e-6c26-4435-8605-88c7d924f5ca] Running
	I1128 00:44:49.027469   45269 system_pods.go:74] duration metric: took 7.544096ms to wait for pod list to return data ...
	I1128 00:44:49.027479   45269 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:44:49.032133   45269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:44:49.032170   45269 node_conditions.go:123] node cpu capacity is 2
	I1128 00:44:49.032183   45269 node_conditions.go:105] duration metric: took 4.695493ms to run NodePressure ...
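(Note: the node_conditions lines report the node's CPU and ephemeral-storage capacity before the NodePressure check completes. A minimal client-go sketch that reads the same capacity fields follows; the kubeconfig path and node name are assumptions.)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node, err := client.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-732472", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Capacity is a map of resource name to quantity, e.g. cpu: 2, ephemeral-storage: 17784752Ki.
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
}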
	I1128 00:44:49.032203   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:49.293443   45269 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:44:49.297880   45269 retry.go:31] will retry after 216.894607ms: kubelet not initialised
	I1128 00:44:49.528912   45269 retry.go:31] will retry after 354.406288ms: kubelet not initialised
	I1128 00:44:49.897328   45269 retry.go:31] will retry after 462.959721ms: kubelet not initialised
	I1128 00:44:50.368260   45269 retry.go:31] will retry after 930.99638ms: kubelet not initialised
	I1128 00:44:51.303993   45269 retry.go:31] will retry after 1.275477572s: kubelet not initialised
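(Note: the retry.go lines above show the wait for the restarted kubelet being retried with a growing, jittered delay. The following is a generic sketch of that pattern, not minikube's actual retry package; the initial delay, growth factor, and jitter are illustrative values.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with a doubling, jittered delay until it
// succeeds or the attempt budget is exhausted.
func retryWithBackoff(fn func() error, attempts int, initial time.Duration) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		// Add up to 50% jitter so concurrent callers do not retry in lockstep.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %s\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return errors.New("kubelet not initialised after all retries")
}

func main() {
	_ = retryWithBackoff(func() error {
		return errors.New("kubelet not initialised") // placeholder for the real readiness check
	}, 5, 200*time.Millisecond)
}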
	I1128 00:44:48.984387   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:51.482900   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:49.916072   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:52.415253   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:52.626182   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:54.626822   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:56.627881   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:52.584797   45269 retry.go:31] will retry after 2.542158001s: kubelet not initialised
	I1128 00:44:55.132600   45269 retry.go:31] will retry after 1.850404606s: kubelet not initialised
	I1128 00:44:56.987924   45269 retry.go:31] will retry after 2.371310185s: kubelet not initialised
	I1128 00:44:53.483557   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:55.982236   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:54.916135   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:57.415818   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:59.127409   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:01.629561   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:59.366141   45269 retry.go:31] will retry after 8.068803464s: kubelet not initialised
	I1128 00:44:57.983189   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:00.482336   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:02.483708   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:59.915991   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:02.414672   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:04.127296   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:06.127766   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:04.484008   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:06.983257   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:04.415147   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:06.914282   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:08.128322   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:10.627792   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:07.439538   45269 retry.go:31] will retry after 10.31431504s: kubelet not initialised
	I1128 00:45:08.985186   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:11.481933   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:08.914385   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:11.414899   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:12.628874   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:14.629312   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:17.126592   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:13.487653   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:15.983710   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:13.915497   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:15.915686   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:18.416396   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:19.127337   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:21.128352   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:17.759682   45269 retry.go:31] will retry after 12.137072248s: kubelet not initialised
	I1128 00:45:18.482187   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:20.982360   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:20.915228   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:22.918669   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:23.630252   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:26.128326   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:22.982597   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:24.983348   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:26.985418   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:25.415620   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:27.914150   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:28.626533   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:30.633655   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:29.902379   45269 kubeadm.go:787] kubelet initialised
	I1128 00:45:29.902403   45269 kubeadm.go:788] duration metric: took 40.608931816s waiting for restarted kubelet to initialise ...
	I1128 00:45:29.902410   45269 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:45:29.908442   45269 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9sh7z" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.914018   45269 pod_ready.go:92] pod "coredns-5644d7b6d9-9sh7z" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:29.914055   45269 pod_ready.go:81] duration metric: took 5.584146ms waiting for pod "coredns-5644d7b6d9-9sh7z" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.914069   45269 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-v8z7h" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.918699   45269 pod_ready.go:92] pod "coredns-5644d7b6d9-v8z7h" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:29.918720   45269 pod_ready.go:81] duration metric: took 4.644035ms waiting for pod "coredns-5644d7b6d9-v8z7h" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.918729   45269 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.922818   45269 pod_ready.go:92] pod "etcd-old-k8s-version-732472" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:29.922837   45269 pod_ready.go:81] duration metric: took 4.102217ms waiting for pod "etcd-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.922846   45269 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.927182   45269 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-732472" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:29.927208   45269 pod_ready.go:81] duration metric: took 4.354519ms waiting for pod "kube-apiserver-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.927220   45269 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:30.301553   45269 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-732472" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:30.301583   45269 pod_ready.go:81] duration metric: took 374.352863ms waiting for pod "kube-controller-manager-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:30.301611   45269 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jqrks" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:30.700858   45269 pod_ready.go:92] pod "kube-proxy-jqrks" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:30.700879   45269 pod_ready.go:81] duration metric: took 399.260896ms waiting for pod "kube-proxy-jqrks" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:30.700890   45269 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:31.103319   45269 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-732472" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:31.103340   45269 pod_ready.go:81] duration metric: took 402.442769ms waiting for pod "kube-scheduler-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:31.103349   45269 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace to be "Ready" ...
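The pod_ready.go entries above and below record minikube polling each pod's Ready condition until a per-pod deadline (here 4m0s) expires. A minimal sketch of that polling pattern with standard client-go, assuming a hypothetical waitPodReady helper rather than minikube's actual pod_ready.go:

    package podwait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the named pod until its PodReady condition is True or
    // the timeout expires; each unsuccessful poll corresponds to one of the
    // `has status "Ready":"False"` entries in the log.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API error: keep waiting
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }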
	I1128 00:45:29.482088   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:31.483235   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:29.915117   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:32.416142   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:33.127196   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:35.127500   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:37.128846   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:33.422466   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:35.908596   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:33.983360   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:35.983776   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:34.417575   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:36.915005   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:39.627473   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:42.126292   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:37.908783   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:39.909842   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:41.910185   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:38.481697   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:40.481935   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:42.483458   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:39.415244   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:41.915086   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:44.127088   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:46.128254   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:44.409802   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:46.415828   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:44.986515   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:47.483162   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:44.414253   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:46.416386   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:48.628705   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:51.130754   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:48.908171   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:50.910974   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:49.985617   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:52.483720   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:48.915063   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:50.915382   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:53.414813   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:53.627668   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:55.629312   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:53.409415   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:55.420993   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:54.983055   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:56.983251   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:55.919627   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:58.415481   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:58.129666   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:00.629368   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:57.910151   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:00.408805   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:59.485375   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:01.983754   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:00.915086   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:03.413478   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:03.129933   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:05.627697   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:02.410888   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:04.910323   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:04.482593   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:06.981922   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:05.414437   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:07.415659   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:07.628741   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:10.126717   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:12.127246   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:07.408374   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:09.411381   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:11.416658   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:08.982790   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:10.984134   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:09.914828   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:11.915812   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:14.135673   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:16.626139   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:13.909480   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:16.409873   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:13.481792   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:15.482823   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:14.416315   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:16.914123   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:18.628828   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:21.131592   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:18.411060   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:20.910071   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:17.983098   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:20.482047   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:22.483266   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:19.413826   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:21.415442   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:23.626664   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:25.626823   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:23.424355   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:25.908255   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:24.984606   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:27.482265   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:23.915227   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:26.417059   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:27.628773   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:30.126818   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:27.911487   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:30.409652   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:29.485507   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:31.983913   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:28.916438   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:31.415565   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:32.626887   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:34.628401   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:37.128691   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:32.910776   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:35.421469   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:34.482605   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:36.982844   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:33.913533   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:35.914337   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:37.914708   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:39.627072   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:41.627591   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:37.908233   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:39.910199   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:38.983620   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:41.482862   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:39.914965   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:41.915003   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:43.628492   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:46.127393   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:42.408895   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:44.409264   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:46.909077   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:43.483111   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:45.483236   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:43.916039   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:46.415407   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:48.627253   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:51.127503   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:49.418512   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:51.427899   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:47.982977   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:49.983264   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:52.483168   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:48.914124   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:50.915620   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:52.919567   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:53.627296   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:55.627334   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:53.908531   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:56.408610   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:54.983084   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:57.481889   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:55.414154   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:57.416518   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:58.126605   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:00.127372   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:02.127896   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:58.410152   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:00.910206   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:59.482177   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:01.982997   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:59.915381   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:01.915574   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:04.626760   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:06.628849   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:03.417243   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:05.417887   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:03.983490   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:05.984161   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:04.414677   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:06.420179   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:09.127843   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:11.626987   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:07.908838   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:10.408385   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:08.482404   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:10.484146   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:08.914093   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:10.922145   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:13.417231   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:13.627586   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:15.628294   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:12.410728   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:14.910177   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:16.910469   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:12.982123   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:14.984037   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:17.483771   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:15.915323   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:18.415070   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:18.129617   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:20.628266   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:19.423065   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:21.908978   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:19.983122   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:22.482857   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:20.415232   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:22.915218   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:23.129285   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:25.627839   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:23.910794   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:26.409956   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:24.985146   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:27.482512   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:24.916041   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:27.415836   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:27.627978   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:30.127213   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:32.127569   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:28.413035   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:30.909092   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:29.483528   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:31.983745   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:29.913604   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:31.914567   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:34.129952   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:36.626951   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:33.414345   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:35.414559   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:34.481916   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:36.482024   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:34.413520   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:36.414517   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:38.416081   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:38.627773   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:41.126690   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:37.414665   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:39.908876   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:38.482323   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:40.983125   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:40.914615   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:43.415528   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:43.128692   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:45.627228   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:42.412788   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:44.909732   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:46.910133   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:43.482424   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:45.482507   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:47.482562   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:45.416841   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:47.914229   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:48.127074   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:50.627355   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:49.411030   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:51.420657   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:49.483765   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:51.982325   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:50.414235   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:52.414715   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:52.627557   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:54.628111   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:57.129482   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:53.910232   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:56.409320   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:53.795074   45580 pod_ready.go:81] duration metric: took 4m0.000752019s waiting for pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace to be "Ready" ...
	E1128 00:47:53.795108   45580 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:47:53.795124   45580 pod_ready.go:38] duration metric: took 4m9.844437599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:47:53.795148   45580 kubeadm.go:640] restartCluster took 4m29.759592783s
	W1128 00:47:53.795209   45580 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 00:47:53.795237   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
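At this point the embed-certs run gives up on restarting the existing cluster: the wait has hit its deadline, so minikube resets the control plane and re-initialises it from scratch. A rough sketch of that fallback, with runSSH standing in for minikube's ssh_runner (the function name and wiring are assumptions, not the actual kubeadm.go code):

    package recovery

    // resetAndReinit is an illustrative sketch of the fallback recorded above:
    // wipe the failed control plane, then run kubeadm init against the staged
    // config. runSSH is an assumed stand-in for running commands on the node.
    func resetAndReinit(runSSH func(cmd string) error, kubeadmConfig string) error {
        if err := runSSH("sudo kubeadm reset --cri-socket /var/run/crio/crio.sock --force"); err != nil {
            return err
        }
        return runSSH("sudo kubeadm init --config " + kubeadmConfig)
    }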
	I1128 00:47:54.416610   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:56.915781   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:59.129569   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:01.627046   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:58.409599   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:00.409906   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:58.916155   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:01.416966   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:03.627676   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:06.126607   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:02.410451   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:04.411074   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:06.912243   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:07.609428   45580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.814163406s)
	I1128 00:48:07.609508   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:07.624300   45580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:48:07.634606   45580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:48:07.644733   45580 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:48:07.644802   45580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 00:48:03.915780   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:06.416602   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:08.128657   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:10.629487   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:09.411193   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:11.908147   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:07.867577   45580 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 00:48:08.915404   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:11.416668   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:13.129233   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:15.630498   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:13.909762   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:16.409160   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:13.916628   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:15.916715   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:17.917022   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:19.126081   45580 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1128 00:48:19.126157   45580 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 00:48:19.126245   45580 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 00:48:19.126356   45580 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 00:48:19.126476   45580 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 00:48:19.126544   45580 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 00:48:19.128354   45580 out.go:204]   - Generating certificates and keys ...
	I1128 00:48:19.128461   45580 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 00:48:19.128546   45580 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 00:48:19.128664   45580 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 00:48:19.128807   45580 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 00:48:19.128927   45580 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 00:48:19.129001   45580 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 00:48:19.129100   45580 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 00:48:19.129175   45580 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 00:48:19.129275   45580 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 00:48:19.129387   45580 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 00:48:19.129432   45580 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 00:48:19.129501   45580 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 00:48:19.129559   45580 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 00:48:19.129616   45580 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 00:48:19.129696   45580 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 00:48:19.129760   45580 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 00:48:19.129853   45580 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 00:48:19.129921   45580 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 00:48:19.131350   45580 out.go:204]   - Booting up control plane ...
	I1128 00:48:19.131462   45580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 00:48:19.131578   45580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 00:48:19.131674   45580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 00:48:19.131798   45580 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 00:48:19.131914   45580 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 00:48:19.131972   45580 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 00:48:19.132149   45580 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 00:48:19.132245   45580 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502916 seconds
	I1128 00:48:19.132388   45580 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 00:48:19.132540   45580 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 00:48:19.132619   45580 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 00:48:19.132850   45580 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-304541 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 00:48:19.132959   45580 kubeadm.go:322] [bootstrap-token] Using token: tbyyd7.r005gkl9z2ll5pno
	I1128 00:48:19.134488   45580 out.go:204]   - Configuring RBAC rules ...
	I1128 00:48:19.134603   45580 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 00:48:19.134691   45580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 00:48:19.134841   45580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 00:48:19.135030   45580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 00:48:19.135200   45580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 00:48:19.135311   45580 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 00:48:19.135453   45580 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 00:48:19.135532   45580 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 00:48:19.135600   45580 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 00:48:19.135611   45580 kubeadm.go:322] 
	I1128 00:48:19.135692   45580 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 00:48:19.135700   45580 kubeadm.go:322] 
	I1128 00:48:19.135798   45580 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 00:48:19.135807   45580 kubeadm.go:322] 
	I1128 00:48:19.135840   45580 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 00:48:19.135916   45580 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 00:48:19.135987   45580 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 00:48:19.135996   45580 kubeadm.go:322] 
	I1128 00:48:19.136074   45580 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 00:48:19.136084   45580 kubeadm.go:322] 
	I1128 00:48:19.136153   45580 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 00:48:19.136161   45580 kubeadm.go:322] 
	I1128 00:48:19.136231   45580 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 00:48:19.136329   45580 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 00:48:19.136439   45580 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 00:48:19.136448   45580 kubeadm.go:322] 
	I1128 00:48:19.136552   45580 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 00:48:19.136662   45580 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 00:48:19.136674   45580 kubeadm.go:322] 
	I1128 00:48:19.136766   45580 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token tbyyd7.r005gkl9z2ll5pno \
	I1128 00:48:19.136878   45580 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1128 00:48:19.136907   45580 kubeadm.go:322] 	--control-plane 
	I1128 00:48:19.136913   45580 kubeadm.go:322] 
	I1128 00:48:19.136986   45580 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 00:48:19.136998   45580 kubeadm.go:322] 
	I1128 00:48:19.137097   45580 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tbyyd7.r005gkl9z2ll5pno \
	I1128 00:48:19.137259   45580 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1128 00:48:19.137282   45580 cni.go:84] Creating CNI manager for ""
	I1128 00:48:19.137290   45580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:48:19.138890   45580 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:48:18.126502   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:20.131785   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:18.410659   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:20.910338   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:19.140172   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:48:19.160540   45580 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
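The two ssh_runner lines just above create /etc/cni/net.d and copy a 457-byte bridge conflist into it. The log does not show the file's contents; the following is only an assumed, generic bridge CNI config of roughly that shape, plus a helper that writes it, to make the step concrete (all field values here are illustrative):

    package cniconfig

    import "os"

    // bridgeConflist is an assumed example of a bridge CNI configuration; the
    // real /etc/cni/net.d/1-k8s.conflist contents are not shown in this log.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    // WriteBridgeConflist writes the example config to the given path with the
    // usual 0644 permissions for CNI config files.
    func WriteBridgeConflist(path string) error {
        return os.WriteFile(path, []byte(bridgeConflist), 0o644)
    }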
	I1128 00:48:19.224333   45580 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:48:19.224409   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:19.224455   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=embed-certs-304541 minikube.k8s.io/updated_at=2023_11_28T00_48_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:19.301346   45580 ops.go:34] apiserver oom_adj: -16
	I1128 00:48:19.544274   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:19.656215   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:20.257645   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:20.757476   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:21.257246   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:21.757278   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:22.256655   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:22.757282   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:20.415051   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:22.914901   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:22.627184   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:24.627388   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:27.127311   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:23.409417   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:25.909086   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:23.257594   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:23.757135   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:24.257396   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:24.757508   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:25.257426   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:25.756605   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:26.256768   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:26.756656   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:27.256783   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:27.756856   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:25.414964   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:27.415763   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:28.257005   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:28.756875   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:29.256833   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:29.757261   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:30.257313   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:30.756918   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:31.257535   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:31.757356   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:31.917284   45580 kubeadm.go:1081] duration metric: took 12.692941702s to wait for elevateKubeSystemPrivileges.
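The burst of "kubectl get sa default" runs above (00:48:19 through 00:48:31) is minikube polling until the default ServiceAccount exists, i.e. until the API server and controller-manager have settled, before it reports elevateKubeSystemPrivileges as done. A reduced Go sketch of that shape, reusing the binary and kubeconfig paths from the log; the retry interval and timeout here are illustrative, not the tool's actual values:

// Poll `kubectl get sa default` until the default ServiceAccount exists.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.28.4/kubectl"
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(5 * time.Minute) // assumed budget
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run(); err == nil {
			fmt.Println("default service account is available")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}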
	I1128 00:48:31.917326   45580 kubeadm.go:406] StartCluster complete in 5m7.933075195s
	I1128 00:48:31.917353   45580 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:48:31.917430   45580 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:48:31.919940   45580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:48:31.920855   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:48:31.921063   45580 config.go:182] Loaded profile config "embed-certs-304541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:48:31.921037   45580 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:48:31.921110   45580 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-304541"
	I1128 00:48:31.921123   45580 addons.go:69] Setting default-storageclass=true in profile "embed-certs-304541"
	I1128 00:48:31.921143   45580 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-304541"
	I1128 00:48:31.921148   45580 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-304541"
	W1128 00:48:31.921152   45580 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:48:31.921116   45580 addons.go:69] Setting metrics-server=true in profile "embed-certs-304541"
	I1128 00:48:31.921213   45580 host.go:66] Checking if "embed-certs-304541" exists ...
	I1128 00:48:31.921220   45580 addons.go:231] Setting addon metrics-server=true in "embed-certs-304541"
	W1128 00:48:31.921229   45580 addons.go:240] addon metrics-server should already be in state true
	I1128 00:48:31.921265   45580 host.go:66] Checking if "embed-certs-304541" exists ...
	I1128 00:48:31.921531   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.921545   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.921578   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.921584   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.921594   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.921605   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.941345   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39959
	I1128 00:48:31.941374   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33283
	I1128 00:48:31.941359   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I1128 00:48:31.942009   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.942040   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.942449   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.942460   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.942477   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.942488   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.942549   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.942937   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.942955   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.943129   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.943134   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.943300   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.943646   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.943671   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.943774   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:48:31.944439   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.944470   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.947579   45580 addons.go:231] Setting addon default-storageclass=true in "embed-certs-304541"
	W1128 00:48:31.947605   45580 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:48:31.947635   45580 host.go:66] Checking if "embed-certs-304541" exists ...
	I1128 00:48:31.948083   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.948114   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.964906   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39541
	I1128 00:48:31.964942   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38435
	I1128 00:48:31.966157   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.966261   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.966778   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.966795   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.966980   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.966999   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.967444   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.967481   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37679
	I1128 00:48:31.967447   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.967636   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:48:31.968331   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.968434   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:48:31.968812   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.968830   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.969729   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:48:31.972519   45580 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:48:31.970271   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:48:31.972982   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.974461   45580 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:48:31.974479   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:48:31.974498   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:48:31.976187   45580 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 00:48:31.974991   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.977660   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:31.977907   45580 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 00:48:31.977925   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 00:48:31.977943   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:48:31.978001   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.978243   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:48:31.978264   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:31.978506   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:48:31.978727   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:48:31.978954   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:48:31.979170   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:48:31.980878   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:31.981226   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:48:31.981262   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:31.981399   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:48:31.981571   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:48:31.981690   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:48:31.981810   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:48:31.997812   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43311
	I1128 00:48:31.998404   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.998989   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.999016   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.999427   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.999652   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:48:32.001212   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:48:32.001482   45580 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:48:32.001496   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:48:32.001513   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:48:32.002981   45580 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-304541" context rescaled to 1 replicas
	I1128 00:48:32.003019   45580 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.93 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:48:32.005961   45580 out.go:177] * Verifying Kubernetes components...
	I1128 00:48:29.127403   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:31.127830   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:27.911587   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:30.411923   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:32.004640   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:32.005211   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:48:32.007586   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:32.007585   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:48:32.007700   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:32.007722   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:48:32.007894   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:48:32.008049   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:48:32.213297   45580 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 00:48:32.213322   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 00:48:32.255646   45580 node_ready.go:35] waiting up to 6m0s for node "embed-certs-304541" to be "Ready" ...
	I1128 00:48:32.255743   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 00:48:32.268542   45580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:48:32.270044   45580 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 00:48:32.270066   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 00:48:32.304458   45580 node_ready.go:49] node "embed-certs-304541" has status "Ready":"True"
	I1128 00:48:32.304486   45580 node_ready.go:38] duration metric: took 48.802082ms waiting for node "embed-certs-304541" to be "Ready" ...
	I1128 00:48:32.304498   45580 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:48:32.320550   45580 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6n54l" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:32.437814   45580 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:48:32.437852   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 00:48:32.462274   45580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:48:32.541622   45580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:48:29.418692   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:30.455152   45815 pod_ready.go:81] duration metric: took 4m0.000108261s waiting for pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace to be "Ready" ...
	E1128 00:48:30.455199   45815 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:48:30.455216   45815 pod_ready.go:38] duration metric: took 4m12.906382743s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:48:30.455251   45815 kubeadm.go:640] restartCluster took 4m33.513232005s
	W1128 00:48:30.455312   45815 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 00:48:30.455356   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 00:48:34.327113   45580 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.071322786s)
	I1128 00:48:34.327155   45580 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
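The sed pipeline that just completed rewrites the coredns ConfigMap so the Corefile gains a hosts block ahead of the forward directive. Reconstructed from the sed expressions in the command above, the injected stanza is roughly:

        hosts {
           192.168.50.1 host.minikube.internal
           fallthrough
        }

which is what lets pods resolve host.minikube.internal to the host-side gateway 192.168.50.1.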
	I1128 00:48:34.342711   45580 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.074127133s)
	I1128 00:48:34.342776   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.342791   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.343188   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.343284   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.343328   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.343339   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.343348   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.343581   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.343598   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.366719   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.366754   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.367052   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.367104   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.367119   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.467705   45580 pod_ready.go:102] pod "coredns-5dd5756b68-6n54l" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:34.935662   45580 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.473338078s)
	I1128 00:48:34.935745   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.935814   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.936143   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.936184   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.936193   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.936203   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.936211   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.936435   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.936482   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.977248   45580 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.435573064s)
	I1128 00:48:34.977318   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.977345   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.977738   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.977785   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.977806   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.977824   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.977837   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.979823   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.979823   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.979849   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.979860   45580 addons.go:467] Verifying addon metrics-server=true in "embed-certs-304541"
	I1128 00:48:34.981768   45580 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 00:48:33.129597   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:35.129880   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:32.912875   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:35.411225   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:34.983440   45580 addons.go:502] enable addons completed in 3.062399778s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1128 00:48:36.495977   45580 pod_ready.go:92] pod "coredns-5dd5756b68-6n54l" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.496002   45580 pod_ready.go:81] duration metric: took 4.175421265s waiting for pod "coredns-5dd5756b68-6n54l" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.496012   45580 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kjg5f" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.508269   45580 pod_ready.go:92] pod "coredns-5dd5756b68-kjg5f" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.508293   45580 pod_ready.go:81] duration metric: took 12.274473ms waiting for pod "coredns-5dd5756b68-kjg5f" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.508302   45580 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.515826   45580 pod_ready.go:92] pod "etcd-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.515855   45580 pod_ready.go:81] duration metric: took 7.545794ms waiting for pod "etcd-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.515873   45580 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.523206   45580 pod_ready.go:92] pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.523271   45580 pod_ready.go:81] duration metric: took 7.388614ms waiting for pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.523286   45580 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.529859   45580 pod_ready.go:92] pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.529881   45580 pod_ready.go:81] duration metric: took 6.58575ms waiting for pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.529889   45580 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w5ct2" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.857435   45580 pod_ready.go:92] pod "kube-proxy-w5ct2" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.857467   45580 pod_ready.go:81] duration metric: took 327.570428ms waiting for pod "kube-proxy-w5ct2" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.857481   45580 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:37.257433   45580 pod_ready.go:92] pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:37.257455   45580 pod_ready.go:81] duration metric: took 399.966903ms waiting for pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:37.257462   45580 pod_ready.go:38] duration metric: took 4.952954771s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
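The pod_ready waits above poll each system pod until its PodReady condition is True. A minimal client-go sketch of the same check for a single pod, assuming the kubeconfig path used elsewhere in this log; the pod name is taken from the trace, and the real helper also honors the 6m0s budget and the label selectors listed above:

// Poll one kube-system pod until its PodReady condition reports True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-5dd5756b68-6n54l", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second) // the real loop also enforces the 6m0s deadline
	}
}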
	I1128 00:48:37.257476   45580 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:48:37.257523   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:48:37.275627   45580 api_server.go:72] duration metric: took 5.272574466s to wait for apiserver process to appear ...
	I1128 00:48:37.275656   45580 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:48:37.275673   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:48:37.283884   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 200:
	ok
	I1128 00:48:37.285716   45580 api_server.go:141] control plane version: v1.28.4
	I1128 00:48:37.285744   45580 api_server.go:131] duration metric: took 10.080776ms to wait for apiserver health ...
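The healthz probe logged just above is a plain HTTPS GET against the API server that succeeds when the body is "ok". A stripped-down Go sketch of that request; minikube authenticates with the cluster's client certificates, whereas this version skips TLS verification only to keep the example short:

// GET https://<apiserver>/healthz and print the status plus body ("ok" when healthy).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Skipping verification is a shortcut for this sketch only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.50.93:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}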
	I1128 00:48:37.285766   45580 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:48:37.460530   45580 system_pods.go:59] 9 kube-system pods found
	I1128 00:48:37.460555   45580 system_pods.go:61] "coredns-5dd5756b68-6n54l" [bb59175d-e2d9-4c98-9940-b705fa76512f] Running
	I1128 00:48:37.460560   45580 system_pods.go:61] "coredns-5dd5756b68-kjg5f" [bf956dfb-3a7f-4605-a849-ee887562fce5] Running
	I1128 00:48:37.460563   45580 system_pods.go:61] "etcd-embed-certs-304541" [7726ea36-d2a2-4ba8-ad20-e892b0c0059c] Running
	I1128 00:48:37.460568   45580 system_pods.go:61] "kube-apiserver-embed-certs-304541" [340e8023-afd3-4105-b513-3f232dfbd370] Running
	I1128 00:48:37.460572   45580 system_pods.go:61] "kube-controller-manager-embed-certs-304541" [ddba15be-e7c2-4cea-9256-1d7e6ea7b017] Running
	I1128 00:48:37.460575   45580 system_pods.go:61] "kube-proxy-w5ct2" [b3ac66db-fe8d-419d-9237-b0dd4077559a] Running
	I1128 00:48:37.460579   45580 system_pods.go:61] "kube-scheduler-embed-certs-304541" [30830958-963d-4571-8e47-acc169506ead] Running
	I1128 00:48:37.460585   45580 system_pods.go:61] "metrics-server-57f55c9bc5-xzz2t" [926e9a40-f0fe-47ea-8e44-6816132ec0c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:48:37.460589   45580 system_pods.go:61] "storage-provisioner" [c62a8419-b0e5-4330-a49b-986693e183b2] Running
	I1128 00:48:37.460597   45580 system_pods.go:74] duration metric: took 174.824783ms to wait for pod list to return data ...
	I1128 00:48:37.460619   45580 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:48:37.656404   45580 default_sa.go:45] found service account: "default"
	I1128 00:48:37.656431   45580 default_sa.go:55] duration metric: took 195.805836ms for default service account to be created ...
	I1128 00:48:37.656444   45580 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:48:37.861049   45580 system_pods.go:86] 9 kube-system pods found
	I1128 00:48:37.861086   45580 system_pods.go:89] "coredns-5dd5756b68-6n54l" [bb59175d-e2d9-4c98-9940-b705fa76512f] Running
	I1128 00:48:37.861095   45580 system_pods.go:89] "coredns-5dd5756b68-kjg5f" [bf956dfb-3a7f-4605-a849-ee887562fce5] Running
	I1128 00:48:37.861101   45580 system_pods.go:89] "etcd-embed-certs-304541" [7726ea36-d2a2-4ba8-ad20-e892b0c0059c] Running
	I1128 00:48:37.861108   45580 system_pods.go:89] "kube-apiserver-embed-certs-304541" [340e8023-afd3-4105-b513-3f232dfbd370] Running
	I1128 00:48:37.861116   45580 system_pods.go:89] "kube-controller-manager-embed-certs-304541" [ddba15be-e7c2-4cea-9256-1d7e6ea7b017] Running
	I1128 00:48:37.861122   45580 system_pods.go:89] "kube-proxy-w5ct2" [b3ac66db-fe8d-419d-9237-b0dd4077559a] Running
	I1128 00:48:37.861128   45580 system_pods.go:89] "kube-scheduler-embed-certs-304541" [30830958-963d-4571-8e47-acc169506ead] Running
	I1128 00:48:37.861140   45580 system_pods.go:89] "metrics-server-57f55c9bc5-xzz2t" [926e9a40-f0fe-47ea-8e44-6816132ec0c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:48:37.861157   45580 system_pods.go:89] "storage-provisioner" [c62a8419-b0e5-4330-a49b-986693e183b2] Running
	I1128 00:48:37.861171   45580 system_pods.go:126] duration metric: took 204.720501ms to wait for k8s-apps to be running ...
	I1128 00:48:37.861187   45580 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:48:37.861241   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:37.875344   45580 system_svc.go:56] duration metric: took 14.150294ms WaitForService to wait for kubelet.
	I1128 00:48:37.875380   45580 kubeadm.go:581] duration metric: took 5.872335245s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:48:37.875407   45580 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:48:38.057075   45580 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:48:38.057106   45580 node_conditions.go:123] node cpu capacity is 2
	I1128 00:48:38.057117   45580 node_conditions.go:105] duration metric: took 181.705529ms to run NodePressure ...
	I1128 00:48:38.057127   45580 start.go:228] waiting for startup goroutines ...
	I1128 00:48:38.057133   45580 start.go:233] waiting for cluster config update ...
	I1128 00:48:38.057141   45580 start.go:242] writing updated cluster config ...
	I1128 00:48:38.057366   45580 ssh_runner.go:195] Run: rm -f paused
	I1128 00:48:38.107014   45580 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 00:48:38.109071   45580 out.go:177] * Done! kubectl is now configured to use "embed-certs-304541" cluster and "default" namespace by default
	I1128 00:48:37.626062   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:38.819130   46126 pod_ready.go:81] duration metric: took 4m0.000531461s waiting for pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace to be "Ready" ...
	E1128 00:48:38.819159   46126 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:48:38.819168   46126 pod_ready.go:38] duration metric: took 4m5.602220781s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:48:38.819189   46126 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:48:38.819216   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 00:48:38.819269   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 00:48:38.882052   46126 cri.go:89] found id: "a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:38.882075   46126 cri.go:89] found id: ""
	I1128 00:48:38.882084   46126 logs.go:284] 1 containers: [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6]
	I1128 00:48:38.882143   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:38.886688   46126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 00:48:38.886751   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 00:48:38.926163   46126 cri.go:89] found id: "0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:38.926190   46126 cri.go:89] found id: ""
	I1128 00:48:38.926197   46126 logs.go:284] 1 containers: [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c]
	I1128 00:48:38.926259   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:38.930505   46126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 00:48:38.930558   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 00:48:38.979793   46126 cri.go:89] found id: "02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:38.979816   46126 cri.go:89] found id: ""
	I1128 00:48:38.979823   46126 logs.go:284] 1 containers: [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b]
	I1128 00:48:38.979876   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:38.984146   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 00:48:38.984244   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 00:48:39.033485   46126 cri.go:89] found id: "032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:39.033509   46126 cri.go:89] found id: ""
	I1128 00:48:39.033519   46126 logs.go:284] 1 containers: [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193]
	I1128 00:48:39.033575   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:39.038977   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 00:48:39.039038   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 00:48:39.079669   46126 cri.go:89] found id: "2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:39.079697   46126 cri.go:89] found id: ""
	I1128 00:48:39.079707   46126 logs.go:284] 1 containers: [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55]
	I1128 00:48:39.079767   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:39.084447   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 00:48:39.084515   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 00:48:39.121494   46126 cri.go:89] found id: "cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:39.121523   46126 cri.go:89] found id: ""
	I1128 00:48:39.121533   46126 logs.go:284] 1 containers: [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64]
	I1128 00:48:39.121594   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:39.126495   46126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 00:48:39.126554   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 00:48:39.168822   46126 cri.go:89] found id: ""
	I1128 00:48:39.168851   46126 logs.go:284] 0 containers: []
	W1128 00:48:39.168862   46126 logs.go:286] No container was found matching "kindnet"
	I1128 00:48:39.168869   46126 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 00:48:39.168924   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 00:48:39.213834   46126 cri.go:89] found id: "fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:39.213859   46126 cri.go:89] found id: ""
	I1128 00:48:39.213869   46126 logs.go:284] 1 containers: [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc]
	I1128 00:48:39.213914   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:39.218746   46126 logs.go:123] Gathering logs for dmesg ...
	I1128 00:48:39.218772   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 00:48:39.232098   46126 logs.go:123] Gathering logs for describe nodes ...
	I1128 00:48:39.232127   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 00:48:39.373641   46126 logs.go:123] Gathering logs for kube-apiserver [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6] ...
	I1128 00:48:39.373674   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:39.451311   46126 logs.go:123] Gathering logs for storage-provisioner [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc] ...
	I1128 00:48:39.451349   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:39.498219   46126 logs.go:123] Gathering logs for CRI-O ...
	I1128 00:48:39.498247   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 00:48:39.952276   46126 logs.go:123] Gathering logs for kubelet ...
	I1128 00:48:39.952314   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 00:48:40.008385   46126 logs.go:123] Gathering logs for coredns [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b] ...
	I1128 00:48:40.008425   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:40.052409   46126 logs.go:123] Gathering logs for kube-scheduler [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193] ...
	I1128 00:48:40.052443   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:40.092943   46126 logs.go:123] Gathering logs for kube-proxy [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55] ...
	I1128 00:48:40.092978   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:40.135490   46126 logs.go:123] Gathering logs for kube-controller-manager [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64] ...
	I1128 00:48:40.135520   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:40.189756   46126 logs.go:123] Gathering logs for container status ...
	I1128 00:48:40.189793   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 00:48:40.242615   46126 logs.go:123] Gathering logs for etcd [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c] ...
	I1128 00:48:40.242643   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
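The "Gathering logs" sequence above is a per-component loop: resolve the container ID with crictl ps, then pull its last 400 log lines. A compressed Go sketch of the same loop using os/exec; error handling is trimmed and the component list simply mirrors the names in this trace:

// For each component, find its container via crictl and dump its recent logs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil || len(strings.TrimSpace(string(out))) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		id := strings.Fields(string(out))[0]
		logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("=== %s (%s) ===\n%s\n", name, id, logs)
	}
}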
	I1128 00:48:37.415898   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:39.910954   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:42.802428   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:48:42.818606   46126 api_server.go:72] duration metric: took 4m14.508070703s to wait for apiserver process to appear ...
	I1128 00:48:42.818632   46126 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:48:42.818667   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 00:48:42.818721   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 00:48:42.872566   46126 cri.go:89] found id: "a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:42.872603   46126 cri.go:89] found id: ""
	I1128 00:48:42.872613   46126 logs.go:284] 1 containers: [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6]
	I1128 00:48:42.872675   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:42.878165   46126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 00:48:42.878232   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 00:48:42.924667   46126 cri.go:89] found id: "0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:42.924689   46126 cri.go:89] found id: ""
	I1128 00:48:42.924699   46126 logs.go:284] 1 containers: [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c]
	I1128 00:48:42.924772   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:42.929748   46126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 00:48:42.929809   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 00:48:42.977787   46126 cri.go:89] found id: "02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:42.977815   46126 cri.go:89] found id: ""
	I1128 00:48:42.977825   46126 logs.go:284] 1 containers: [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b]
	I1128 00:48:42.977887   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:42.982991   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 00:48:42.983071   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 00:48:43.032835   46126 cri.go:89] found id: "032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:43.032866   46126 cri.go:89] found id: ""
	I1128 00:48:43.032876   46126 logs.go:284] 1 containers: [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193]
	I1128 00:48:43.032933   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:43.038635   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 00:48:43.038711   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 00:48:43.084051   46126 cri.go:89] found id: "2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:43.084080   46126 cri.go:89] found id: ""
	I1128 00:48:43.084090   46126 logs.go:284] 1 containers: [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55]
	I1128 00:48:43.084161   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:43.088908   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 00:48:43.088976   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 00:48:43.130640   46126 cri.go:89] found id: "cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:43.130666   46126 cri.go:89] found id: ""
	I1128 00:48:43.130676   46126 logs.go:284] 1 containers: [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64]
	I1128 00:48:43.130738   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:43.135354   46126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 00:48:43.135434   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 00:48:43.179655   46126 cri.go:89] found id: ""
	I1128 00:48:43.179690   46126 logs.go:284] 0 containers: []
	W1128 00:48:43.179699   46126 logs.go:286] No container was found matching "kindnet"
	I1128 00:48:43.179705   46126 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 00:48:43.179770   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 00:48:43.228309   46126 cri.go:89] found id: "fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:43.228335   46126 cri.go:89] found id: ""
	I1128 00:48:43.228343   46126 logs.go:284] 1 containers: [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc]
	I1128 00:48:43.228404   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:43.233343   46126 logs.go:123] Gathering logs for dmesg ...
	I1128 00:48:43.233375   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 00:48:43.247396   46126 logs.go:123] Gathering logs for describe nodes ...
	I1128 00:48:43.247430   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 00:48:43.386131   46126 logs.go:123] Gathering logs for kube-apiserver [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6] ...
	I1128 00:48:43.386181   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:43.463228   46126 logs.go:123] Gathering logs for kube-proxy [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55] ...
	I1128 00:48:43.463275   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:43.519469   46126 logs.go:123] Gathering logs for kube-controller-manager [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64] ...
	I1128 00:48:43.519511   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:43.581402   46126 logs.go:123] Gathering logs for container status ...
	I1128 00:48:43.581437   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 00:48:43.641804   46126 logs.go:123] Gathering logs for kubelet ...
	I1128 00:48:43.641844   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 00:48:43.707768   46126 logs.go:123] Gathering logs for etcd [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c] ...
	I1128 00:48:43.707807   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:43.779636   46126 logs.go:123] Gathering logs for coredns [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b] ...
	I1128 00:48:43.779673   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:43.822939   46126 logs.go:123] Gathering logs for kube-scheduler [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193] ...
	I1128 00:48:43.822972   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:43.869304   46126 logs.go:123] Gathering logs for storage-provisioner [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc] ...
	I1128 00:48:43.869344   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:43.917500   46126 logs.go:123] Gathering logs for CRI-O ...
	I1128 00:48:43.917528   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 00:48:46.886551   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:48:46.892696   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 200:
	ok
	I1128 00:48:46.894400   46126 api_server.go:141] control plane version: v1.28.4
	I1128 00:48:46.894424   46126 api_server.go:131] duration metric: took 4.075784232s to wait for apiserver health ...
	I1128 00:48:46.894433   46126 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:48:46.894455   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 00:48:46.894492   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 00:48:46.939259   46126 cri.go:89] found id: "a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:46.939291   46126 cri.go:89] found id: ""
	I1128 00:48:46.939302   46126 logs.go:284] 1 containers: [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6]
	I1128 00:48:46.939364   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:46.946934   46126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 00:48:46.947012   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 00:48:46.989896   46126 cri.go:89] found id: "0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:46.989920   46126 cri.go:89] found id: ""
	I1128 00:48:46.989930   46126 logs.go:284] 1 containers: [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c]
	I1128 00:48:46.989988   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:46.994923   46126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 00:48:46.994994   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 00:48:47.040298   46126 cri.go:89] found id: "02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:47.040330   46126 cri.go:89] found id: ""
	I1128 00:48:47.040339   46126 logs.go:284] 1 containers: [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b]
	I1128 00:48:47.040396   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.045041   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 00:48:47.045113   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 00:48:47.093254   46126 cri.go:89] found id: "032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:47.093282   46126 cri.go:89] found id: ""
	I1128 00:48:47.093290   46126 logs.go:284] 1 containers: [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193]
	I1128 00:48:47.093345   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.097856   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 00:48:47.097916   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 00:48:47.150763   46126 cri.go:89] found id: "2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:47.150790   46126 cri.go:89] found id: ""
	I1128 00:48:47.150800   46126 logs.go:284] 1 containers: [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55]
	I1128 00:48:47.150855   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.155272   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 00:48:47.155348   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 00:48:47.203549   46126 cri.go:89] found id: "cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:47.203586   46126 cri.go:89] found id: ""
	I1128 00:48:47.203600   46126 logs.go:284] 1 containers: [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64]
	I1128 00:48:47.203670   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.209313   46126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 00:48:47.209384   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 00:48:42.410241   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:44.909607   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:46.893894   45815 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (16.438515297s)
	I1128 00:48:46.893965   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:46.909967   45815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:48:46.919457   45815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:48:46.928580   45815 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:48:46.928629   45815 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 00:48:46.989655   45815 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.0
	I1128 00:48:46.989772   45815 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 00:48:47.162717   45815 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 00:48:47.162868   45815 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 00:48:47.163002   45815 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 00:48:47.453392   45815 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 00:48:47.455125   45815 out.go:204]   - Generating certificates and keys ...
	I1128 00:48:47.455291   45815 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 00:48:47.455388   45815 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 00:48:47.455530   45815 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 00:48:47.455605   45815 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 00:48:47.456116   45815 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 00:48:47.456786   45815 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 00:48:47.457320   45815 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 00:48:47.457814   45815 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 00:48:47.458228   45815 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 00:48:47.458584   45815 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 00:48:47.458984   45815 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 00:48:47.459080   45815 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 00:48:47.654823   45815 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 00:48:47.858053   45815 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1128 00:48:48.006981   45815 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 00:48:48.256244   45815 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 00:48:48.381440   45815 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 00:48:48.381976   45815 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 00:48:48.384696   45815 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 00:48:48.386824   45815 out.go:204]   - Booting up control plane ...
	I1128 00:48:48.386943   45815 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 00:48:48.387057   45815 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 00:48:48.387155   45815 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 00:48:48.404036   45815 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 00:48:48.408139   45815 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 00:48:48.408584   45815 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 00:48:48.539731   45815 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 00:48:47.259312   46126 cri.go:89] found id: ""
	I1128 00:48:47.259343   46126 logs.go:284] 0 containers: []
	W1128 00:48:47.259353   46126 logs.go:286] No container was found matching "kindnet"
	I1128 00:48:47.259361   46126 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 00:48:47.259421   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 00:48:47.308650   46126 cri.go:89] found id: "fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:47.308681   46126 cri.go:89] found id: ""
	I1128 00:48:47.308692   46126 logs.go:284] 1 containers: [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc]
	I1128 00:48:47.308764   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.313702   46126 logs.go:123] Gathering logs for dmesg ...
	I1128 00:48:47.313727   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 00:48:47.327753   46126 logs.go:123] Gathering logs for describe nodes ...
	I1128 00:48:47.327788   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 00:48:47.490493   46126 logs.go:123] Gathering logs for etcd [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c] ...
	I1128 00:48:47.490525   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:47.554064   46126 logs.go:123] Gathering logs for coredns [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b] ...
	I1128 00:48:47.554097   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:47.604401   46126 logs.go:123] Gathering logs for kube-proxy [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55] ...
	I1128 00:48:47.604433   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:47.643173   46126 logs.go:123] Gathering logs for kube-controller-manager [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64] ...
	I1128 00:48:47.643211   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:47.707400   46126 logs.go:123] Gathering logs for storage-provisioner [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc] ...
	I1128 00:48:47.707432   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:47.763831   46126 logs.go:123] Gathering logs for container status ...
	I1128 00:48:47.763860   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 00:48:47.817244   46126 logs.go:123] Gathering logs for kubelet ...
	I1128 00:48:47.817278   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 00:48:47.872462   46126 logs.go:123] Gathering logs for kube-apiserver [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6] ...
	I1128 00:48:47.872499   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:47.930695   46126 logs.go:123] Gathering logs for kube-scheduler [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193] ...
	I1128 00:48:47.930729   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:47.987718   46126 logs.go:123] Gathering logs for CRI-O ...
	I1128 00:48:47.987748   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 00:48:50.856470   46126 system_pods.go:59] 8 kube-system pods found
	I1128 00:48:50.856510   46126 system_pods.go:61] "coredns-5dd5756b68-n7qpb" [d027f799-6ced-488e-a4f7-6df351193c64] Running
	I1128 00:48:50.856518   46126 system_pods.go:61] "etcd-default-k8s-diff-port-488423" [55bf80da-df13-4429-962c-7fdb5ab44ea8] Running
	I1128 00:48:50.856525   46126 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-488423" [88715645-e98e-42be-ad99-cc7711605abc] Running
	I1128 00:48:50.856533   46126 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-488423" [07935350-12e0-4e86-8f88-7e03890aa417] Running
	I1128 00:48:50.856539   46126 system_pods.go:61] "kube-proxy-2sfbm" [8d92ac1f-4070-4000-9bc6-3d277e0c8c6e] Running
	I1128 00:48:50.856545   46126 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-488423" [42baed98-6b29-4f33-8bb3-df082a1b36ce] Running
	I1128 00:48:50.856558   46126 system_pods.go:61] "metrics-server-57f55c9bc5-fk9xx" [8b0d0cd6-41c5-4b67-98f9-f046e959e0e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:48:50.856571   46126 system_pods.go:61] "storage-provisioner" [f1e6e7d1-86aa-403c-b753-2b94beb7d7b1] Running
	I1128 00:48:50.856579   46126 system_pods.go:74] duration metric: took 3.962140088s to wait for pod list to return data ...
	I1128 00:48:50.856589   46126 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:48:50.859308   46126 default_sa.go:45] found service account: "default"
	I1128 00:48:50.859338   46126 default_sa.go:55] duration metric: took 2.741136ms for default service account to be created ...
	I1128 00:48:50.859347   46126 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:48:50.865347   46126 system_pods.go:86] 8 kube-system pods found
	I1128 00:48:50.865371   46126 system_pods.go:89] "coredns-5dd5756b68-n7qpb" [d027f799-6ced-488e-a4f7-6df351193c64] Running
	I1128 00:48:50.865377   46126 system_pods.go:89] "etcd-default-k8s-diff-port-488423" [55bf80da-df13-4429-962c-7fdb5ab44ea8] Running
	I1128 00:48:50.865382   46126 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-488423" [88715645-e98e-42be-ad99-cc7711605abc] Running
	I1128 00:48:50.865387   46126 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-488423" [07935350-12e0-4e86-8f88-7e03890aa417] Running
	I1128 00:48:50.865391   46126 system_pods.go:89] "kube-proxy-2sfbm" [8d92ac1f-4070-4000-9bc6-3d277e0c8c6e] Running
	I1128 00:48:50.865395   46126 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-488423" [42baed98-6b29-4f33-8bb3-df082a1b36ce] Running
	I1128 00:48:50.865405   46126 system_pods.go:89] "metrics-server-57f55c9bc5-fk9xx" [8b0d0cd6-41c5-4b67-98f9-f046e959e0e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:48:50.865413   46126 system_pods.go:89] "storage-provisioner" [f1e6e7d1-86aa-403c-b753-2b94beb7d7b1] Running
	I1128 00:48:50.865425   46126 system_pods.go:126] duration metric: took 6.071837ms to wait for k8s-apps to be running ...
	I1128 00:48:50.865441   46126 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:48:50.865490   46126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:50.882729   46126 system_svc.go:56] duration metric: took 17.277766ms WaitForService to wait for kubelet.
	I1128 00:48:50.882767   46126 kubeadm.go:581] duration metric: took 4m22.572235871s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:48:50.882796   46126 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:48:50.886638   46126 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:48:50.886671   46126 node_conditions.go:123] node cpu capacity is 2
	I1128 00:48:50.886684   46126 node_conditions.go:105] duration metric: took 3.881703ms to run NodePressure ...
	I1128 00:48:50.886699   46126 start.go:228] waiting for startup goroutines ...
	I1128 00:48:50.886712   46126 start.go:233] waiting for cluster config update ...
	I1128 00:48:50.886725   46126 start.go:242] writing updated cluster config ...
	I1128 00:48:50.886995   46126 ssh_runner.go:195] Run: rm -f paused
	I1128 00:48:50.947562   46126 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 00:48:50.949119   46126 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-488423" cluster and "default" namespace by default
	I1128 00:48:47.419653   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:49.909410   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:51.909739   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:53.910387   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:56.408786   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:56.542000   45815 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002009 seconds
	I1128 00:48:56.567203   45815 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 00:48:56.583239   45815 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 00:48:57.114661   45815 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 00:48:57.114917   45815 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-473615 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 00:48:57.633030   45815 kubeadm.go:322] [bootstrap-token] Using token: vz7ey4.v2qfoncp2ok7nh54
	I1128 00:48:57.634835   45815 out.go:204]   - Configuring RBAC rules ...
	I1128 00:48:57.634961   45815 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 00:48:57.640535   45815 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 00:48:57.653911   45815 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 00:48:57.658740   45815 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 00:48:57.662927   45815 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 00:48:57.667238   45815 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 00:48:57.688281   45815 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 00:48:57.949630   45815 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 00:48:58.055744   45815 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 00:48:58.057024   45815 kubeadm.go:322] 
	I1128 00:48:58.057159   45815 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 00:48:58.057179   45815 kubeadm.go:322] 
	I1128 00:48:58.057290   45815 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 00:48:58.057310   45815 kubeadm.go:322] 
	I1128 00:48:58.057343   45815 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 00:48:58.057431   45815 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 00:48:58.057518   45815 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 00:48:58.057536   45815 kubeadm.go:322] 
	I1128 00:48:58.057601   45815 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 00:48:58.057609   45815 kubeadm.go:322] 
	I1128 00:48:58.057673   45815 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 00:48:58.057678   45815 kubeadm.go:322] 
	I1128 00:48:58.057719   45815 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 00:48:58.057787   45815 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 00:48:58.057841   45815 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 00:48:58.057844   45815 kubeadm.go:322] 
	I1128 00:48:58.057921   45815 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 00:48:58.057987   45815 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 00:48:58.057991   45815 kubeadm.go:322] 
	I1128 00:48:58.058062   45815 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vz7ey4.v2qfoncp2ok7nh54 \
	I1128 00:48:58.058148   45815 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1128 00:48:58.058183   45815 kubeadm.go:322] 	--control-plane 
	I1128 00:48:58.058198   45815 kubeadm.go:322] 
	I1128 00:48:58.058266   45815 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 00:48:58.058272   45815 kubeadm.go:322] 
	I1128 00:48:58.058347   45815 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vz7ey4.v2qfoncp2ok7nh54 \
	I1128 00:48:58.058449   45815 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1128 00:48:58.059375   45815 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 00:48:58.059404   45815 cni.go:84] Creating CNI manager for ""
	I1128 00:48:58.059415   45815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:48:58.061524   45815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:48:58.062981   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:48:58.121061   45815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:48:58.143978   45815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:48:58.144060   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:58.144068   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=no-preload-473615 minikube.k8s.io/updated_at=2023_11_28T00_48_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:58.495592   45815 ops.go:34] apiserver oom_adj: -16
	I1128 00:48:58.495756   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:58.590073   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:58.412254   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:00.912329   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:59.189174   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:59.688440   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:00.189285   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:00.688724   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:01.189197   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:01.688512   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:02.189219   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:02.689235   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:03.189405   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:03.689243   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:03.414190   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:05.909164   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:04.188645   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:04.688928   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:05.189330   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:05.689126   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:06.189257   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:06.688476   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:07.189386   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:07.689051   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:08.188961   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:08.689080   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:09.188591   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:09.688502   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:10.188492   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:10.303728   45815 kubeadm.go:1081] duration metric: took 12.159747313s to wait for elevateKubeSystemPrivileges.
	I1128 00:49:10.303773   45815 kubeadm.go:406] StartCluster complete in 5m13.413969558s
	I1128 00:49:10.303794   45815 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:49:10.303880   45815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:49:10.306274   45815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:49:10.306559   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:49:10.306678   45815 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:49:10.306764   45815 addons.go:69] Setting storage-provisioner=true in profile "no-preload-473615"
	I1128 00:49:10.306786   45815 addons.go:231] Setting addon storage-provisioner=true in "no-preload-473615"
	W1128 00:49:10.306799   45815 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:49:10.306822   45815 config.go:182] Loaded profile config "no-preload-473615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 00:49:10.306844   45815 host.go:66] Checking if "no-preload-473615" exists ...
	I1128 00:49:10.306903   45815 addons.go:69] Setting default-storageclass=true in profile "no-preload-473615"
	I1128 00:49:10.306924   45815 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-473615"
	I1128 00:49:10.307065   45815 addons.go:69] Setting metrics-server=true in profile "no-preload-473615"
	I1128 00:49:10.307089   45815 addons.go:231] Setting addon metrics-server=true in "no-preload-473615"
	W1128 00:49:10.307097   45815 addons.go:240] addon metrics-server should already be in state true
	I1128 00:49:10.307140   45815 host.go:66] Checking if "no-preload-473615" exists ...
	I1128 00:49:10.307283   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.307284   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.307366   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.307313   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.307600   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.307650   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.323788   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I1128 00:49:10.324333   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.324915   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.324940   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.325212   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42505
	I1128 00:49:10.325655   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.325825   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.326138   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.326156   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.326346   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.326375   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.326504   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.326968   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.326991   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.330263   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44581
	I1128 00:49:10.331124   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.331538   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.331559   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.331951   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.332131   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:49:10.335360   45815 addons.go:231] Setting addon default-storageclass=true in "no-preload-473615"
	W1128 00:49:10.335378   45815 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:49:10.335405   45815 host.go:66] Checking if "no-preload-473615" exists ...
	I1128 00:49:10.335685   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.335715   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.346750   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42245
	I1128 00:49:10.346822   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46137
	I1128 00:49:10.347279   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.347400   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.347703   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.347731   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.347906   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.347919   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.347983   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.348096   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:49:10.348232   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.348429   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:49:10.350025   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:49:10.352544   45815 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 00:49:10.350506   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:49:10.355541   45815 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:49:10.354491   45815 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 00:49:10.356963   45815 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:49:10.356980   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:49:10.356993   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:49:10.355570   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 00:49:10.357068   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:49:10.356139   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I1128 00:49:10.356295   45815 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-473615" context rescaled to 1 replicas
	I1128 00:49:10.357149   45815 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.195 Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:49:10.358543   45815 out.go:177] * Verifying Kubernetes components...
	I1128 00:49:10.359926   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:49:10.357719   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.360555   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.360575   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.361020   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.361318   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.361551   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:49:10.361574   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.361736   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:49:10.361938   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:49:10.362037   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.362129   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:49:10.362295   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.362317   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.362381   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:49:10.362676   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:49:10.362699   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.362961   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:49:10.363188   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:49:10.363360   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:49:10.363499   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:49:10.381194   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42707
	I1128 00:49:10.381543   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.382012   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.382032   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.382399   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.382584   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:49:10.384269   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:49:10.384500   45815 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:49:10.384513   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:49:10.384527   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:49:10.387448   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.388000   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:49:10.388027   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.388169   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:49:10.388335   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:49:10.388477   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:49:10.388578   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:49:10.513157   45815 node_ready.go:35] waiting up to 6m0s for node "no-preload-473615" to be "Ready" ...
	I1128 00:49:10.513251   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 00:49:10.546158   45815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:49:10.566225   45815 node_ready.go:49] node "no-preload-473615" has status "Ready":"True"
	I1128 00:49:10.566248   45815 node_ready.go:38] duration metric: took 53.063342ms waiting for node "no-preload-473615" to be "Ready" ...
	I1128 00:49:10.566259   45815 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:49:10.589374   45815 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 00:49:10.589400   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 00:49:10.608085   45815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:49:10.657717   45815 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 00:49:10.657746   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 00:49:10.693300   45815 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:10.745796   45815 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:49:10.745821   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 00:49:10.820139   45815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:49:10.848411   45815 pod_ready.go:92] pod "etcd-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:10.848444   45815 pod_ready.go:81] duration metric: took 155.116855ms waiting for pod "etcd-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:10.848459   45815 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:11.035904   45815 pod_ready.go:92] pod "kube-apiserver-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:11.035929   45815 pod_ready.go:81] duration metric: took 187.461745ms waiting for pod "kube-apiserver-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:11.035941   45815 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:11.269000   45815 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1128 00:49:11.634167   45815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.087967346s)
	I1128 00:49:11.634213   45815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.026096699s)
	I1128 00:49:11.634226   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.634239   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.634250   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.634272   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.634578   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.634621   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.634637   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.634639   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.634649   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.634650   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.634656   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.634660   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.634595   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:11.634942   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:11.634958   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:11.634986   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.635009   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.634989   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.635049   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.657473   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.657495   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.657814   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.657828   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.758491   45815 pod_ready.go:92] pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:11.758514   45815 pod_ready.go:81] duration metric: took 722.565796ms waiting for pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:11.758525   45815 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bv5lq" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:12.084449   45815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.264259029s)
	I1128 00:49:12.084510   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:12.084524   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:12.084846   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:12.084865   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:12.084875   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:12.084870   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:12.084885   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:12.085142   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:12.085152   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:12.085164   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:12.085174   45815 addons.go:467] Verifying addon metrics-server=true in "no-preload-473615"
	I1128 00:49:12.087081   45815 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1128 00:49:08.409321   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:10.909836   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:12.088572   45815 addons.go:502] enable addons completed in 1.781896775s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1128 00:49:13.830651   45815 pod_ready.go:102] pod "kube-proxy-bv5lq" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:14.830780   45815 pod_ready.go:92] pod "kube-proxy-bv5lq" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:14.830805   45815 pod_ready.go:81] duration metric: took 3.072274458s waiting for pod "kube-proxy-bv5lq" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:14.830815   45815 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:14.836248   45815 pod_ready.go:92] pod "kube-scheduler-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:14.836266   45815 pod_ready.go:81] duration metric: took 5.444378ms waiting for pod "kube-scheduler-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:14.836273   45815 pod_ready.go:38] duration metric: took 4.270002588s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:49:14.836288   45815 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:49:14.836329   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:49:14.860322   45815 api_server.go:72] duration metric: took 4.503144983s to wait for apiserver process to appear ...
	I1128 00:49:14.860354   45815 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:49:14.860375   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:49:14.866977   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 200:
	ok
	I1128 00:49:14.868294   45815 api_server.go:141] control plane version: v1.29.0-rc.0
	I1128 00:49:14.868318   45815 api_server.go:131] duration metric: took 7.955565ms to wait for apiserver health ...
	I1128 00:49:14.868328   45815 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:49:14.875943   45815 system_pods.go:59] 8 kube-system pods found
	I1128 00:49:14.875972   45815 system_pods.go:61] "coredns-76f75df574-kbrjg" [881031bb-af46-48a7-b609-7fb1c96b2056] Running
	I1128 00:49:14.875979   45815 system_pods.go:61] "etcd-no-preload-473615" [ae2b57ca-5a22-4f4b-b227-00edfbb3b520] Running
	I1128 00:49:14.875986   45815 system_pods.go:61] "kube-apiserver-no-preload-473615" [9e9104c8-ee9f-4370-b92e-d301ea9cd880] Running
	I1128 00:49:14.875993   45815 system_pods.go:61] "kube-controller-manager-no-preload-473615" [f52dccb6-3d88-44b2-b733-38dd240dffa5] Running
	I1128 00:49:14.875999   45815 system_pods.go:61] "kube-proxy-bv5lq" [fe88f49f-5fc1-4877-a982-38fee04c9e2d] Running
	I1128 00:49:14.876005   45815 system_pods.go:61] "kube-scheduler-no-preload-473615" [8d6a3177-757a-493e-ba5e-265f95d6f462] Running
	I1128 00:49:14.876019   45815 system_pods.go:61] "metrics-server-57f55c9bc5-mpqdq" [8cef6d4c-e932-4c97-8d87-3b4c3777c8b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:49:14.876031   45815 system_pods.go:61] "storage-provisioner" [b8fc9309-7354-44e3-aa10-f4fb3c185f62] Running
	I1128 00:49:14.876042   45815 system_pods.go:74] duration metric: took 7.70749ms to wait for pod list to return data ...
	I1128 00:49:14.876058   45815 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:49:14.918080   45815 default_sa.go:45] found service account: "default"
	I1128 00:49:14.918107   45815 default_sa.go:55] duration metric: took 42.036279ms for default service account to be created ...
	I1128 00:49:14.918119   45815 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:49:15.120338   45815 system_pods.go:86] 8 kube-system pods found
	I1128 00:49:15.120368   45815 system_pods.go:89] "coredns-76f75df574-kbrjg" [881031bb-af46-48a7-b609-7fb1c96b2056] Running
	I1128 00:49:15.120376   45815 system_pods.go:89] "etcd-no-preload-473615" [ae2b57ca-5a22-4f4b-b227-00edfbb3b520] Running
	I1128 00:49:15.120383   45815 system_pods.go:89] "kube-apiserver-no-preload-473615" [9e9104c8-ee9f-4370-b92e-d301ea9cd880] Running
	I1128 00:49:15.120390   45815 system_pods.go:89] "kube-controller-manager-no-preload-473615" [f52dccb6-3d88-44b2-b733-38dd240dffa5] Running
	I1128 00:49:15.120395   45815 system_pods.go:89] "kube-proxy-bv5lq" [fe88f49f-5fc1-4877-a982-38fee04c9e2d] Running
	I1128 00:49:15.120401   45815 system_pods.go:89] "kube-scheduler-no-preload-473615" [8d6a3177-757a-493e-ba5e-265f95d6f462] Running
	I1128 00:49:15.120413   45815 system_pods.go:89] "metrics-server-57f55c9bc5-mpqdq" [8cef6d4c-e932-4c97-8d87-3b4c3777c8b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:49:15.120420   45815 system_pods.go:89] "storage-provisioner" [b8fc9309-7354-44e3-aa10-f4fb3c185f62] Running
	I1128 00:49:15.120437   45815 system_pods.go:126] duration metric: took 202.310611ms to wait for k8s-apps to be running ...
	I1128 00:49:15.120452   45815 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:49:15.120501   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:49:15.134858   45815 system_svc.go:56] duration metric: took 14.396652ms WaitForService to wait for kubelet.
	I1128 00:49:15.134886   45815 kubeadm.go:581] duration metric: took 4.777716544s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:49:15.134902   45815 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:49:15.318344   45815 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:49:15.318370   45815 node_conditions.go:123] node cpu capacity is 2
	I1128 00:49:15.318380   45815 node_conditions.go:105] duration metric: took 183.473974ms to run NodePressure ...
	I1128 00:49:15.318390   45815 start.go:228] waiting for startup goroutines ...
	I1128 00:49:15.318396   45815 start.go:233] waiting for cluster config update ...
	I1128 00:49:15.318405   45815 start.go:242] writing updated cluster config ...
	I1128 00:49:15.318651   45815 ssh_runner.go:195] Run: rm -f paused
	I1128 00:49:15.368036   45815 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.0 (minor skew: 1)
	I1128 00:49:15.369853   45815 out.go:177] * Done! kubectl is now configured to use "no-preload-473615" cluster and "default" namespace by default
	I1128 00:49:12.909910   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:15.420062   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:17.421038   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:19.909444   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:21.910293   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:24.412962   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:26.908733   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:28.910353   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:31.104114   45269 pod_ready.go:81] duration metric: took 4m0.000750315s waiting for pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace to be "Ready" ...
	E1128 00:49:31.104164   45269 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:49:31.104219   45269 pod_ready.go:38] duration metric: took 4m1.201800344s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:49:31.104258   45269 kubeadm.go:640] restartCluster took 5m3.38216869s
	W1128 00:49:31.104338   45269 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 00:49:31.104371   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 00:49:35.883236   45269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.778829992s)
	I1128 00:49:35.883312   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:49:35.898846   45269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:49:35.910716   45269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:49:35.921838   45269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:49:35.921883   45269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1128 00:49:35.987683   45269 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1128 00:49:35.987889   45269 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 00:49:36.153771   45269 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 00:49:36.153926   45269 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 00:49:36.154056   45269 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 00:49:36.387112   45269 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 00:49:36.387236   45269 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 00:49:36.394929   45269 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1128 00:49:36.523951   45269 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 00:49:36.526180   45269 out.go:204]   - Generating certificates and keys ...
	I1128 00:49:36.526284   45269 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 00:49:36.526378   45269 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 00:49:36.526508   45269 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 00:49:36.526603   45269 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 00:49:36.526723   45269 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 00:49:36.526807   45269 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 00:49:36.526928   45269 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 00:49:36.527026   45269 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 00:49:36.527127   45269 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 00:49:36.527671   45269 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 00:49:36.527734   45269 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 00:49:36.527807   45269 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 00:49:36.966756   45269 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 00:49:37.138717   45269 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 00:49:37.307916   45269 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 00:49:37.374115   45269 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 00:49:37.375393   45269 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 00:49:37.377224   45269 out.go:204]   - Booting up control plane ...
	I1128 00:49:37.377338   45269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 00:49:37.381887   45269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 00:49:37.383114   45269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 00:49:37.384032   45269 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 00:49:37.387460   45269 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 00:49:47.893342   45269 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.504508 seconds
	I1128 00:49:47.893497   45269 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 00:49:47.911409   45269 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 00:49:48.437988   45269 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 00:49:48.438226   45269 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-732472 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1128 00:49:48.947631   45269 kubeadm.go:322] [bootstrap-token] Using token: g2kx2b.r3qu6fui94rrmu2m
	I1128 00:49:48.949581   45269 out.go:204]   - Configuring RBAC rules ...
	I1128 00:49:48.949746   45269 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 00:49:48.960004   45269 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 00:49:48.969068   45269 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 00:49:48.973998   45269 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 00:49:48.982331   45269 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 00:49:49.099721   45269 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 00:49:49.367382   45269 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 00:49:49.369069   45269 kubeadm.go:322] 
	I1128 00:49:49.369159   45269 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 00:49:49.369196   45269 kubeadm.go:322] 
	I1128 00:49:49.369325   45269 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 00:49:49.369339   45269 kubeadm.go:322] 
	I1128 00:49:49.369383   45269 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 00:49:49.369449   45269 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 00:49:49.369519   45269 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 00:49:49.369541   45269 kubeadm.go:322] 
	I1128 00:49:49.369619   45269 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 00:49:49.369725   45269 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 00:49:49.369822   45269 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 00:49:49.369839   45269 kubeadm.go:322] 
	I1128 00:49:49.369975   45269 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1128 00:49:49.370080   45269 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 00:49:49.370092   45269 kubeadm.go:322] 
	I1128 00:49:49.370202   45269 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token g2kx2b.r3qu6fui94rrmu2m \
	I1128 00:49:49.370371   45269 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1128 00:49:49.370419   45269 kubeadm.go:322]     --control-plane 	  
	I1128 00:49:49.370432   45269 kubeadm.go:322] 
	I1128 00:49:49.370515   45269 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 00:49:49.370527   45269 kubeadm.go:322] 
	I1128 00:49:49.370639   45269 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g2kx2b.r3qu6fui94rrmu2m \
	I1128 00:49:49.370783   45269 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1128 00:49:49.371106   45269 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 00:49:49.371134   45269 cni.go:84] Creating CNI manager for ""
	I1128 00:49:49.371148   45269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:49:49.373008   45269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:49:49.374371   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:49:49.384861   45269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:49:49.402517   45269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:49:49.402582   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:49.402598   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=old-k8s-version-732472 minikube.k8s.io/updated_at=2023_11_28T00_49_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:49.441523   45269 ops.go:34] apiserver oom_adj: -16
	I1128 00:49:49.674343   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:49.796920   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:50.420537   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:50.920042   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:51.420533   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:51.920538   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:52.420730   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:52.920078   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:53.420670   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:53.920876   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:54.420798   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:54.920702   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:55.420180   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:55.920033   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:56.420702   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:56.920106   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:57.420244   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:57.920637   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:58.420226   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:58.920874   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:59.420228   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:59.920070   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:00.420845   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:00.920883   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:01.420977   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:01.920275   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:02.420097   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:02.920582   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:03.420001   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:03.919906   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:04.420071   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:04.580992   45269 kubeadm.go:1081] duration metric: took 15.178468662s to wait for elevateKubeSystemPrivileges.
	I1128 00:50:04.581023   45269 kubeadm.go:406] StartCluster complete in 5m36.912120738s
	I1128 00:50:04.581042   45269 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:50:04.581125   45269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:50:04.582704   45269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:50:04.582966   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:50:04.583000   45269 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:50:04.583077   45269 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-732472"
	I1128 00:50:04.583105   45269 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-732472"
	W1128 00:50:04.583116   45269 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:50:04.583192   45269 host.go:66] Checking if "old-k8s-version-732472" exists ...
	I1128 00:50:04.583206   45269 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-732472"
	I1128 00:50:04.583227   45269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-732472"
	I1128 00:50:04.583540   45269 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-732472"
	I1128 00:50:04.583565   45269 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-732472"
	W1128 00:50:04.583573   45269 addons.go:240] addon metrics-server should already be in state true
	I1128 00:50:04.583609   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.583635   45269 host.go:66] Checking if "old-k8s-version-732472" exists ...
	I1128 00:50:04.583640   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.583676   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.583643   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.583193   45269 config.go:182] Loaded profile config "old-k8s-version-732472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 00:50:04.584015   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.584069   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.602419   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36231
	I1128 00:50:04.602558   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35981
	I1128 00:50:04.602646   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36113
	I1128 00:50:04.603020   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.603118   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.603196   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.603571   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.603572   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.603597   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.603611   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.603729   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.603753   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.603939   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.603973   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.604086   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.604378   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:50:04.604489   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.604521   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.604617   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.604646   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.608900   45269 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-732472"
	W1128 00:50:04.608925   45269 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:50:04.608953   45269 host.go:66] Checking if "old-k8s-version-732472" exists ...
	I1128 00:50:04.611555   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.611628   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.622409   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33595
	I1128 00:50:04.622446   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45323
	I1128 00:50:04.622876   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.623000   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.623394   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.623424   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.623534   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.623567   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.623886   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.624365   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.624368   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:50:04.624556   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:50:04.626412   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:50:04.626443   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:50:04.629006   45269 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 00:50:04.630723   45269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:50:04.632378   45269 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:50:04.632395   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:50:04.632409   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:50:04.630641   45269 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 00:50:04.632467   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 00:50:04.632479   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:50:04.632126   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38563
	I1128 00:50:04.633062   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.633666   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.633692   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.634447   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.635020   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.635053   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.636332   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.636387   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.636733   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:50:04.636772   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.636795   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:50:04.636830   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.636952   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:50:04.637085   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:50:04.637132   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:50:04.637245   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:50:04.637296   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:50:04.637413   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:50:04.637448   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:50:04.637594   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:50:04.651941   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I1128 00:50:04.652604   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.653192   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.653222   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.653677   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.653838   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:50:04.655532   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:50:04.655848   45269 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:50:04.655868   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:50:04.655890   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:50:04.658852   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.659252   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:50:04.659280   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.659426   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:50:04.659602   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:50:04.659971   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:50:04.660096   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	W1128 00:50:04.792826   45269 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "old-k8s-version-732472" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1128 00:50:04.792863   45269 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1128 00:50:04.792890   45269 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:50:04.795799   45269 out.go:177] * Verifying Kubernetes components...
	I1128 00:50:04.797469   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:50:04.870889   45269 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-732472" to be "Ready" ...
	I1128 00:50:04.871024   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 00:50:04.888333   45269 node_ready.go:49] node "old-k8s-version-732472" has status "Ready":"True"
	I1128 00:50:04.888359   45269 node_ready.go:38] duration metric: took 17.44205ms waiting for node "old-k8s-version-732472" to be "Ready" ...
	I1128 00:50:04.888372   45269 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:50:04.899414   45269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:50:04.902681   45269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:04.904708   45269 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 00:50:04.904734   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 00:50:04.947930   45269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:50:04.977094   45269 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 00:50:04.977123   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 00:50:05.195712   45269 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:50:05.195795   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 00:50:05.292058   45269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:50:06.383144   45269 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.512083846s)
	I1128 00:50:06.383170   45269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.483727542s)
	I1128 00:50:06.383180   45269 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1128 00:50:06.383208   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.383221   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.383572   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.383599   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.383608   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.383606   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Closing plugin on server side
	I1128 00:50:06.383618   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.383835   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.383851   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.383870   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Closing plugin on server side
	I1128 00:50:06.423407   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.423447   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.423758   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.423783   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.423799   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Closing plugin on server side
	I1128 00:50:06.678261   45269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.73029562s)
	I1128 00:50:06.678312   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.678326   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.678640   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.678655   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.678663   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.678672   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.678927   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.678955   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.762082   45269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.46997729s)
	I1128 00:50:06.762140   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.762160   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.762538   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.762557   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.762569   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.762579   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.762599   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Closing plugin on server side
	I1128 00:50:06.762815   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.762830   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.762840   45269 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-732472"
	I1128 00:50:06.765825   45269 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 00:50:06.767637   45269 addons.go:502] enable addons completed in 2.184637132s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1128 00:50:06.959495   45269 pod_ready.go:102] pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace has status "Ready":"False"
	I1128 00:50:08.961160   45269 pod_ready.go:102] pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace has status "Ready":"False"
	I1128 00:50:11.459984   45269 pod_ready.go:102] pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace has status "Ready":"False"
	I1128 00:50:12.959294   45269 pod_ready.go:92] pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace has status "Ready":"True"
	I1128 00:50:12.959317   45269 pod_ready.go:81] duration metric: took 8.056612005s waiting for pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.959326   45269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-fsfpw" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.973244   45269 pod_ready.go:92] pod "coredns-5644d7b6d9-fsfpw" in "kube-system" namespace has status "Ready":"True"
	I1128 00:50:12.973268   45269 pod_ready.go:81] duration metric: took 13.936307ms waiting for pod "coredns-5644d7b6d9-fsfpw" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.973278   45269 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-88chq" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.980471   45269 pod_ready.go:92] pod "kube-proxy-88chq" in "kube-system" namespace has status "Ready":"True"
	I1128 00:50:12.980489   45269 pod_ready.go:81] duration metric: took 7.20414ms waiting for pod "kube-proxy-88chq" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.980496   45269 pod_ready.go:38] duration metric: took 8.092113593s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:50:12.980511   45269 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:50:12.980554   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:50:12.996604   45269 api_server.go:72] duration metric: took 8.203675443s to wait for apiserver process to appear ...
	I1128 00:50:12.996645   45269 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:50:12.996670   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:50:13.006987   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I1128 00:50:13.007986   45269 api_server.go:141] control plane version: v1.16.0
	I1128 00:50:13.008003   45269 api_server.go:131] duration metric: took 11.352257ms to wait for apiserver health ...
	I1128 00:50:13.008010   45269 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:50:13.013658   45269 system_pods.go:59] 5 kube-system pods found
	I1128 00:50:13.013677   45269 system_pods.go:61] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.013682   45269 system_pods.go:61] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.013686   45269 system_pods.go:61] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.013693   45269 system_pods.go:61] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.013697   45269 system_pods.go:61] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.013703   45269 system_pods.go:74] duration metric: took 5.688575ms to wait for pod list to return data ...
	I1128 00:50:13.013710   45269 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:50:13.016210   45269 default_sa.go:45] found service account: "default"
	I1128 00:50:13.016228   45269 default_sa.go:55] duration metric: took 2.513069ms for default service account to be created ...
	I1128 00:50:13.016234   45269 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:50:13.020464   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:13.020488   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.020496   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.020502   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.020513   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.020522   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.020544   45269 retry.go:31] will retry after 244.092512ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:13.270858   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:13.270893   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.270901   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.270907   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.270918   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.270926   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.270946   45269 retry.go:31] will retry after 311.602199ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:13.588013   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:13.588041   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.588047   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.588051   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.588057   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.588062   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.588076   45269 retry.go:31] will retry after 298.08088ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:13.891272   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:13.891302   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.891307   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.891311   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.891318   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.891323   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.891339   45269 retry.go:31] will retry after 474.390305ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:14.371201   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:14.371230   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:14.371236   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:14.371241   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:14.371248   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:14.371253   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:14.371269   45269 retry.go:31] will retry after 719.510586ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:15.096817   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:15.096846   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:15.096851   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:15.096855   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:15.096862   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:15.096866   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:15.096881   45269 retry.go:31] will retry after 684.457384ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:15.786918   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:15.786947   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:15.786952   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:15.786956   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:15.786962   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:15.786967   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:15.786982   45269 retry.go:31] will retry after 721.543291ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:16.513230   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:16.513258   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:16.513263   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:16.513268   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:16.513275   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:16.513280   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:16.513296   45269 retry.go:31] will retry after 1.405502561s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:17.926572   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:17.926610   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:17.926619   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:17.926626   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:17.926636   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:17.926642   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:17.926662   45269 retry.go:31] will retry after 1.65088536s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:19.584099   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:19.584130   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:19.584136   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:19.584140   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:19.584147   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:19.584152   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:19.584168   45269 retry.go:31] will retry after 1.660488369s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:21.250659   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:21.250706   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:21.250714   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:21.250719   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:21.250729   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:21.250736   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:21.250757   45269 retry.go:31] will retry after 1.762203818s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:23.018771   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:23.018798   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:23.018804   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:23.018808   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:23.018815   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:23.018819   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:23.018837   45269 retry.go:31] will retry after 2.558255345s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:25.584363   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:25.584394   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:25.584402   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:25.584409   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:25.584417   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:25.584422   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:25.584446   45269 retry.go:31] will retry after 4.457632402s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:30.049343   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:30.049374   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:30.049381   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:30.049388   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:30.049398   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:30.049406   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:30.049426   45269 retry.go:31] will retry after 5.077489821s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:35.133974   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:35.134001   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:35.134006   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:35.134010   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:35.134022   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:35.134029   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:35.134048   45269 retry.go:31] will retry after 5.675627515s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:40.814779   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:40.814808   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:40.814814   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:40.814818   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:40.814825   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:40.814829   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:40.814846   45269 retry.go:31] will retry after 5.701774609s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:46.524426   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:46.524467   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:46.524475   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:46.524482   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:46.524492   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:46.524499   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:46.524521   45269 retry.go:31] will retry after 7.322045517s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:53.852348   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:53.852378   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:53.852387   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:53.852394   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:53.852406   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:53.852413   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:53.852442   45269 retry.go:31] will retry after 12.532542473s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:51:06.392828   45269 system_pods.go:86] 9 kube-system pods found
	I1128 00:51:06.392858   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:51:06.392863   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:51:06.392872   45269 system_pods.go:89] "etcd-old-k8s-version-732472" [b839e564-30b4-4ddf-a7af-15a11ae6caaf] Pending
	I1128 00:51:06.392876   45269 system_pods.go:89] "kube-apiserver-old-k8s-version-732472" [7f8f59a8-21fb-4161-ba13-c123b21f74cb] Pending
	I1128 00:51:06.392882   45269 system_pods.go:89] "kube-controller-manager-old-k8s-version-732472" [0271d0e4-295a-47fc-a42f-77a8f9d71930] Pending
	I1128 00:51:06.392886   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:51:06.392889   45269 system_pods.go:89] "kube-scheduler-old-k8s-version-732472" [a22ecb05-e88d-4fc4-8e16-df419a9564e3] Pending
	I1128 00:51:06.392897   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:51:06.392901   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:51:06.392915   45269 retry.go:31] will retry after 10.519018157s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:51:16.918264   45269 system_pods.go:86] 9 kube-system pods found
	I1128 00:51:16.918303   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:51:16.918311   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:51:16.918319   45269 system_pods.go:89] "etcd-old-k8s-version-732472" [b839e564-30b4-4ddf-a7af-15a11ae6caaf] Running
	I1128 00:51:16.918326   45269 system_pods.go:89] "kube-apiserver-old-k8s-version-732472" [7f8f59a8-21fb-4161-ba13-c123b21f74cb] Running
	I1128 00:51:16.918333   45269 system_pods.go:89] "kube-controller-manager-old-k8s-version-732472" [0271d0e4-295a-47fc-a42f-77a8f9d71930] Running
	I1128 00:51:16.918340   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:51:16.918346   45269 system_pods.go:89] "kube-scheduler-old-k8s-version-732472" [a22ecb05-e88d-4fc4-8e16-df419a9564e3] Running
	I1128 00:51:16.918360   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:51:16.918375   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:51:16.918386   45269 system_pods.go:126] duration metric: took 1m3.902146285s to wait for k8s-apps to be running ...
	I1128 00:51:16.918398   45269 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:51:16.918445   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:51:16.937522   45269 system_svc.go:56] duration metric: took 19.116204ms WaitForService to wait for kubelet.
	I1128 00:51:16.937556   45269 kubeadm.go:581] duration metric: took 1m12.144633009s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:51:16.937577   45269 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:51:16.941812   45269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:51:16.941838   45269 node_conditions.go:123] node cpu capacity is 2
	I1128 00:51:16.941849   45269 node_conditions.go:105] duration metric: took 4.264769ms to run NodePressure ...
	I1128 00:51:16.941859   45269 start.go:228] waiting for startup goroutines ...
	I1128 00:51:16.941865   45269 start.go:233] waiting for cluster config update ...
	I1128 00:51:16.941874   45269 start.go:242] writing updated cluster config ...
	I1128 00:51:16.942150   45269 ssh_runner.go:195] Run: rm -f paused
	I1128 00:51:16.992567   45269 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1128 00:51:16.994677   45269 out.go:177] 
	W1128 00:51:16.996083   45269 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1128 00:51:16.997442   45269 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1128 00:51:16.998644   45269 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-732472" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 00:43:49 UTC, ends at Tue 2023-11-28 00:57:52 UTC. --
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.659615915Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133072659601469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=2b2a94b5-d83a-4815-805c-e509420dbbe3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.660147621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f995ddee-2c40-4e16-927a-fcf677dbedbc name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.660193962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f995ddee-2c40-4e16-927a-fcf677dbedbc name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.660384454Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00cd4d8553882711c7182818593a636c69b27b4ea9eac918d5f368c1b97a24a8,PodSandboxId:696e3b6bab7a17aa752ff819247d0c210d64838951c2501a0af365a7149040d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701132275036064898,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 95d5410e-5ec3-42c3-a64c-9d6034cc2479,},Annotations:map[string]string{io.kubernetes.container.hash: ed439777,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b,PodSandboxId:db173541c1f693c36f882b61c72100092aae1f213165cfc39180293beaf46f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701132272143130271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n7qpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d027f799-6ced-488e-a4f7-6df351193c64,},Annotations:map[string]string{io.kubernetes.container.hash: 4bcc6111,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc,PodSandboxId:7767398bc7e78b85ca7deaa9630eacc36762317b3c7c37a9efcaee3340cddeda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132266343876864,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: f1e6e7d1-86aa-403c-b753-2b94beb7d7b1,},Annotations:map[string]string{io.kubernetes.container.hash: db1f1b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55,PodSandboxId:9c7ea99fb0fcc9748e79f1d7f62b930f78ab4a1ecc7542817b70d7de0cbb79fa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701132264247661359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2sfbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8d92ac1f-4070-4000-9bc6-3d277e0c8c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 49ba80c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193,PodSandboxId:5994df2943032a79d68e673229bf154c7b911bf283ada2e1e3e144bfdf34b0ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701132257298128614,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 881a15d8e5113e1b5a7cb1c587b7f2ea,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c,PodSandboxId:04d169305b4e28a6765e10820c14fec3f76d3c01542f0166d6479065f913685a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701132257076988000,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b8a335211a74893a9e0a2fbb3b79b67,},
Annotations:map[string]string{io.kubernetes.container.hash: 38744c33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64,PodSandboxId:c4199db689e7ed22a8fe0bfa3bfdfeeb3cfff7df3a398b800d8cfb8e9dceec64,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701132256720175268,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e59a0e6ef976f3f43fd190f644b8b03a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6,PodSandboxId:6b016cf4659b87726f157a5742ce570c6af007503fae7476c146a8c175d94eb1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701132256527580043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
64434154c26454bb0c93b2b163c531da,},Annotations:map[string]string{io.kubernetes.container.hash: 94ce39d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f995ddee-2c40-4e16-927a-fcf677dbedbc name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.699104544Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1648cf94-8176-44b4-8c51-b62d373801f9 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.699176467Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1648cf94-8176-44b4-8c51-b62d373801f9 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.700530581Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=913e8df8-8ace-4da9-8d18-f9ec9e58a9d1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.700893685Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133072700881790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=913e8df8-8ace-4da9-8d18-f9ec9e58a9d1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.701985295Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3ab449d6-7e30-46aa-a2dd-0cdab0e2edb3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.702085520Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3ab449d6-7e30-46aa-a2dd-0cdab0e2edb3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.702353227Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00cd4d8553882711c7182818593a636c69b27b4ea9eac918d5f368c1b97a24a8,PodSandboxId:696e3b6bab7a17aa752ff819247d0c210d64838951c2501a0af365a7149040d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701132275036064898,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 95d5410e-5ec3-42c3-a64c-9d6034cc2479,},Annotations:map[string]string{io.kubernetes.container.hash: ed439777,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b,PodSandboxId:db173541c1f693c36f882b61c72100092aae1f213165cfc39180293beaf46f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701132272143130271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n7qpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d027f799-6ced-488e-a4f7-6df351193c64,},Annotations:map[string]string{io.kubernetes.container.hash: 4bcc6111,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc,PodSandboxId:7767398bc7e78b85ca7deaa9630eacc36762317b3c7c37a9efcaee3340cddeda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132266343876864,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: f1e6e7d1-86aa-403c-b753-2b94beb7d7b1,},Annotations:map[string]string{io.kubernetes.container.hash: db1f1b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55,PodSandboxId:9c7ea99fb0fcc9748e79f1d7f62b930f78ab4a1ecc7542817b70d7de0cbb79fa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701132264247661359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2sfbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8d92ac1f-4070-4000-9bc6-3d277e0c8c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 49ba80c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193,PodSandboxId:5994df2943032a79d68e673229bf154c7b911bf283ada2e1e3e144bfdf34b0ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701132257298128614,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 881a15d8e5113e1b5a7cb1c587b7f2ea,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c,PodSandboxId:04d169305b4e28a6765e10820c14fec3f76d3c01542f0166d6479065f913685a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701132257076988000,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b8a335211a74893a9e0a2fbb3b79b67,},
Annotations:map[string]string{io.kubernetes.container.hash: 38744c33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64,PodSandboxId:c4199db689e7ed22a8fe0bfa3bfdfeeb3cfff7df3a398b800d8cfb8e9dceec64,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701132256720175268,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e59a0e6ef976f3f43fd190f644b8b03a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6,PodSandboxId:6b016cf4659b87726f157a5742ce570c6af007503fae7476c146a8c175d94eb1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701132256527580043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
64434154c26454bb0c93b2b163c531da,},Annotations:map[string]string{io.kubernetes.container.hash: 94ce39d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3ab449d6-7e30-46aa-a2dd-0cdab0e2edb3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.743183190Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=16a13d0b-bfa0-4691-889f-b881d613cd51 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.743271169Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=16a13d0b-bfa0-4691-889f-b881d613cd51 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.745182124Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=984323a3-38d4-4a12-b74a-1d83a16b1c92 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.745557722Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133072745546354,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=984323a3-38d4-4a12-b74a-1d83a16b1c92 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.746334575Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=831e86bd-8b74-4046-86a8-25245c0b43f9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.746410250Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=831e86bd-8b74-4046-86a8-25245c0b43f9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.746598969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00cd4d8553882711c7182818593a636c69b27b4ea9eac918d5f368c1b97a24a8,PodSandboxId:696e3b6bab7a17aa752ff819247d0c210d64838951c2501a0af365a7149040d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701132275036064898,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 95d5410e-5ec3-42c3-a64c-9d6034cc2479,},Annotations:map[string]string{io.kubernetes.container.hash: ed439777,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b,PodSandboxId:db173541c1f693c36f882b61c72100092aae1f213165cfc39180293beaf46f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701132272143130271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n7qpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d027f799-6ced-488e-a4f7-6df351193c64,},Annotations:map[string]string{io.kubernetes.container.hash: 4bcc6111,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc,PodSandboxId:7767398bc7e78b85ca7deaa9630eacc36762317b3c7c37a9efcaee3340cddeda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132266343876864,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: f1e6e7d1-86aa-403c-b753-2b94beb7d7b1,},Annotations:map[string]string{io.kubernetes.container.hash: db1f1b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55,PodSandboxId:9c7ea99fb0fcc9748e79f1d7f62b930f78ab4a1ecc7542817b70d7de0cbb79fa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701132264247661359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2sfbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8d92ac1f-4070-4000-9bc6-3d277e0c8c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 49ba80c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193,PodSandboxId:5994df2943032a79d68e673229bf154c7b911bf283ada2e1e3e144bfdf34b0ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701132257298128614,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 881a15d8e5113e1b5a7cb1c587b7f2ea,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c,PodSandboxId:04d169305b4e28a6765e10820c14fec3f76d3c01542f0166d6479065f913685a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701132257076988000,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b8a335211a74893a9e0a2fbb3b79b67,},
Annotations:map[string]string{io.kubernetes.container.hash: 38744c33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64,PodSandboxId:c4199db689e7ed22a8fe0bfa3bfdfeeb3cfff7df3a398b800d8cfb8e9dceec64,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701132256720175268,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e59a0e6ef976f3f43fd190f644b8b03a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6,PodSandboxId:6b016cf4659b87726f157a5742ce570c6af007503fae7476c146a8c175d94eb1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701132256527580043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
64434154c26454bb0c93b2b163c531da,},Annotations:map[string]string{io.kubernetes.container.hash: 94ce39d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=831e86bd-8b74-4046-86a8-25245c0b43f9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.788208892Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f4a5fd08-a145-4bcf-b68a-0df00887680b name=/runtime.v1.RuntimeService/Version
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.788298445Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f4a5fd08-a145-4bcf-b68a-0df00887680b name=/runtime.v1.RuntimeService/Version
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.789723866Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=69a26dec-2984-4982-b48d-fbb06c1a943f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.790352925Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133072790330434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=69a26dec-2984-4982-b48d-fbb06c1a943f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.792978196Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5ee133f3-9979-41ba-8c2d-4c46598a2888 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.793147197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5ee133f3-9979-41ba-8c2d-4c46598a2888 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:57:52 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 00:57:52.793339004Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00cd4d8553882711c7182818593a636c69b27b4ea9eac918d5f368c1b97a24a8,PodSandboxId:696e3b6bab7a17aa752ff819247d0c210d64838951c2501a0af365a7149040d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701132275036064898,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 95d5410e-5ec3-42c3-a64c-9d6034cc2479,},Annotations:map[string]string{io.kubernetes.container.hash: ed439777,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b,PodSandboxId:db173541c1f693c36f882b61c72100092aae1f213165cfc39180293beaf46f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701132272143130271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n7qpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d027f799-6ced-488e-a4f7-6df351193c64,},Annotations:map[string]string{io.kubernetes.container.hash: 4bcc6111,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc,PodSandboxId:7767398bc7e78b85ca7deaa9630eacc36762317b3c7c37a9efcaee3340cddeda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132266343876864,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: f1e6e7d1-86aa-403c-b753-2b94beb7d7b1,},Annotations:map[string]string{io.kubernetes.container.hash: db1f1b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55,PodSandboxId:9c7ea99fb0fcc9748e79f1d7f62b930f78ab4a1ecc7542817b70d7de0cbb79fa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701132264247661359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2sfbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8d92ac1f-4070-4000-9bc6-3d277e0c8c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 49ba80c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193,PodSandboxId:5994df2943032a79d68e673229bf154c7b911bf283ada2e1e3e144bfdf34b0ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701132257298128614,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 881a15d8e5113e1b5a7cb1c587b7f2ea,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c,PodSandboxId:04d169305b4e28a6765e10820c14fec3f76d3c01542f0166d6479065f913685a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701132257076988000,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b8a335211a74893a9e0a2fbb3b79b67,},
Annotations:map[string]string{io.kubernetes.container.hash: 38744c33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64,PodSandboxId:c4199db689e7ed22a8fe0bfa3bfdfeeb3cfff7df3a398b800d8cfb8e9dceec64,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701132256720175268,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e59a0e6ef976f3f43fd190f644b8b03a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6,PodSandboxId:6b016cf4659b87726f157a5742ce570c6af007503fae7476c146a8c175d94eb1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701132256527580043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
64434154c26454bb0c93b2b163c531da,},Annotations:map[string]string{io.kubernetes.container.hash: 94ce39d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5ee133f3-9979-41ba-8c2d-4c46598a2888 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	00cd4d8553882       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   696e3b6bab7a1       busybox
	02084fe546b60       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   db173541c1f69       coredns-5dd5756b68-n7qpb
	fe8f8f443aabe       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       1                   7767398bc7e78       storage-provisioner
	2d6fefc920655       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   9c7ea99fb0fcc       kube-proxy-2sfbm
	032c85dd651d9       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   5994df2943032       kube-scheduler-default-k8s-diff-port-488423
	0c0deffc33b75       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   04d169305b4e2       etcd-default-k8s-diff-port-488423
	cdf1978d16c71       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   c4199db689e7e       kube-controller-manager-default-k8s-diff-port-488423
	a108c17df3e3a       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   6b016cf4659b8       kube-apiserver-default-k8s-diff-port-488423
	
	* 
	* ==> coredns [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55794 - 32904 "HINFO IN 6344863561981079725.1139160491145542212. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023820628s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-488423
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-488423
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=default-k8s-diff-port-488423
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T00_37_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 00:37:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-488423
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 00:57:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 00:55:07 +0000   Tue, 28 Nov 2023 00:37:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 00:55:07 +0000   Tue, 28 Nov 2023 00:37:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 00:55:07 +0000   Tue, 28 Nov 2023 00:37:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 00:55:07 +0000   Tue, 28 Nov 2023 00:44:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.242
	  Hostname:    default-k8s-diff-port-488423
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 6327e8bb62834ea9b622947f0d7df4bd
	  System UUID:                6327e8bb-6283-4ea9-b622-947f0d7df4bd
	  Boot ID:                    380a6c4b-cffa-42f5-b658-f63a7c6bc5e6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5dd5756b68-n7qpb                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-default-k8s-diff-port-488423                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-default-k8s-diff-port-488423             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-488423    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-2sfbm                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-488423             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-57f55c9bc5-fk9xx                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-488423 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-488423 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-488423 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                20m                kubelet          Node default-k8s-diff-port-488423 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node default-k8s-diff-port-488423 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node default-k8s-diff-port-488423 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m                kubelet          Node default-k8s-diff-port-488423 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-488423 event: Registered Node default-k8s-diff-port-488423 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-488423 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-488423 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-488423 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-488423 event: Registered Node default-k8s-diff-port-488423 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov28 00:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076626] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.622917] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.553999] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.135713] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.650058] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.497498] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.105983] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.147401] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.104573] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.240805] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[Nov28 00:44] systemd-fstab-generator[912]: Ignoring "noauto" for root device
	[ +16.342613] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c] <==
	* {"level":"warn","ts":"2023-11-28T00:44:26.46873Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.896699ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/busybox.179ba2c7de8e5187\" ","response":"range_response_count:1 size:782"}
	{"level":"info","ts":"2023-11-28T00:44:26.468756Z","caller":"traceutil/trace.go:171","msg":"trace[247191754] range","detail":"{range_begin:/registry/events/default/busybox.179ba2c7de8e5187; range_end:; response_count:1; response_revision:528; }","duration":"279.929112ms","start":"2023-11-28T00:44:26.188821Z","end":"2023-11-28T00:44:26.46875Z","steps":["trace[247191754] 'agreement among raft nodes before linearized reading'  (duration: 279.86941ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T00:44:26.468965Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.830914ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2023-11-28T00:44:26.469086Z","caller":"traceutil/trace.go:171","msg":"trace[1884552059] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:528; }","duration":"279.954702ms","start":"2023-11-28T00:44:26.189124Z","end":"2023-11-28T00:44:26.469079Z","steps":["trace[1884552059] 'agreement among raft nodes before linearized reading'  (duration: 279.747578ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T00:44:26.851787Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"235.190598ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9018612268691030654 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/default-k8s-diff-port-488423.179ba2c875107079\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/default-k8s-diff-port-488423.179ba2c875107079\" value_size:525 lease:9018612268691030648 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-11-28T00:44:26.852364Z","caller":"traceutil/trace.go:171","msg":"trace[1312279917] transaction","detail":"{read_only:false; response_revision:531; number_of_response:1; }","duration":"355.917131ms","start":"2023-11-28T00:44:26.496436Z","end":"2023-11-28T00:44:26.852353Z","steps":["trace[1312279917] 'process raft request'  (duration: 355.872001ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T00:44:26.852502Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T00:44:26.496416Z","time spent":"356.015835ms","remote":"127.0.0.1:41820","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":744,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-2sfbm.179ba2c84a05c6cf\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-2sfbm.179ba2c84a05c6cf\" value_size:664 lease:9018612268691030398 >> failure:<>"}
	{"level":"info","ts":"2023-11-28T00:44:26.85256Z","caller":"traceutil/trace.go:171","msg":"trace[1909795857] linearizableReadLoop","detail":"{readStateIndex:548; appliedIndex:546; }","duration":"375.666304ms","start":"2023-11-28T00:44:26.476885Z","end":"2023-11-28T00:44:26.852551Z","steps":["trace[1909795857] 'read index received'  (duration: 9.948199ms)","trace[1909795857] 'applied index is now lower than readState.Index'  (duration: 365.717144ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-28T00:44:26.852521Z","caller":"traceutil/trace.go:171","msg":"trace[918007302] transaction","detail":"{read_only:false; response_revision:530; number_of_response:1; }","duration":"376.522083ms","start":"2023-11-28T00:44:26.475983Z","end":"2023-11-28T00:44:26.852505Z","steps":["trace[918007302] 'process raft request'  (duration: 140.574125ms)","trace[918007302] 'compare'  (duration: 233.204124ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-28T00:44:26.852893Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T00:44:26.47596Z","time spent":"376.909884ms","remote":"127.0.0.1:41918","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":613,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/default-k8s-diff-port-488423.179ba2c875107079\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/default-k8s-diff-port-488423.179ba2c875107079\" value_size:525 lease:9018612268691030648 >> failure:<>"}
	{"level":"warn","ts":"2023-11-28T00:44:26.852921Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"375.668759ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:764"}
	{"level":"info","ts":"2023-11-28T00:44:26.852986Z","caller":"traceutil/trace.go:171","msg":"trace[820529966] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:531; }","duration":"375.737571ms","start":"2023-11-28T00:44:26.477238Z","end":"2023-11-28T00:44:26.852976Z","steps":["trace[820529966] 'agreement among raft nodes before linearized reading'  (duration: 375.628605ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T00:44:26.852737Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"375.860499ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2023-11-28T00:44:26.853206Z","caller":"traceutil/trace.go:171","msg":"trace[798743168] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:531; }","duration":"376.336311ms","start":"2023-11-28T00:44:26.476859Z","end":"2023-11-28T00:44:26.853196Z","steps":["trace[798743168] 'agreement among raft nodes before linearized reading'  (duration: 375.784492ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T00:44:26.853259Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T00:44:26.476845Z","time spent":"376.403982ms","remote":"127.0.0.1:41848","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":218,"request content":"key:\"/registry/serviceaccounts/kube-system/node-controller\" "}
	{"level":"warn","ts":"2023-11-28T00:44:26.853231Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"321.817503ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1128"}
	{"level":"info","ts":"2023-11-28T00:44:26.854108Z","caller":"traceutil/trace.go:171","msg":"trace[376844974] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:531; }","duration":"322.689883ms","start":"2023-11-28T00:44:26.531407Z","end":"2023-11-28T00:44:26.854097Z","steps":["trace[376844974] 'agreement among raft nodes before linearized reading'  (duration: 321.794976ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T00:44:26.854159Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T00:44:26.53139Z","time spent":"322.760453ms","remote":"127.0.0.1:41840","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1151,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2023-11-28T00:44:26.853263Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"369.962252ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-2sfbm\" ","response":"range_response_count:1 size:4609"}
	{"level":"info","ts":"2023-11-28T00:44:26.854444Z","caller":"traceutil/trace.go:171","msg":"trace[1571027589] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-2sfbm; range_end:; response_count:1; response_revision:531; }","duration":"371.140046ms","start":"2023-11-28T00:44:26.483296Z","end":"2023-11-28T00:44:26.854436Z","steps":["trace[1571027589] 'agreement among raft nodes before linearized reading'  (duration: 369.947171ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T00:44:26.854487Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T00:44:26.483284Z","time spent":"371.194775ms","remote":"127.0.0.1:41844","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":4632,"request content":"key:\"/registry/pods/kube-system/kube-proxy-2sfbm\" "}
	{"level":"warn","ts":"2023-11-28T00:44:26.853176Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T00:44:26.477228Z","time spent":"375.902379ms","remote":"127.0.0.1:41826","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":787,"request content":"key:\"/registry/configmaps/kube-system/coredns\" "}
	{"level":"info","ts":"2023-11-28T00:54:20.722228Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":830}
	{"level":"info","ts":"2023-11-28T00:54:20.724762Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":830,"took":"2.153641ms","hash":3084872912}
	{"level":"info","ts":"2023-11-28T00:54:20.72484Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3084872912,"revision":830,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  00:57:53 up 14 min,  0 users,  load average: 0.08, 0.16, 0.15
	Linux default-k8s-diff-port-488423 5.10.57 #1 SMP Mon Nov 27 21:58:27 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6] <==
	* I1128 00:54:22.739411       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1128 00:54:23.739918       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:54:23.739980       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 00:54:23.739988       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 00:54:23.740186       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:54:23.740267       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 00:54:23.741305       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 00:55:22.602153       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1128 00:55:23.740943       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:55:23.740971       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 00:55:23.740986       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 00:55:23.742154       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:55:23.742272       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 00:55:23.742304       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 00:56:22.601695       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1128 00:57:22.601523       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1128 00:57:23.741244       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:57:23.741396       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 00:57:23.741451       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 00:57:23.742562       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:57:23.742694       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 00:57:23.742725       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64] <==
	* I1128 00:52:07.930798       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:52:37.446850       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:52:37.945826       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:53:07.454139       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:53:07.955813       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:53:37.460182       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:53:37.966620       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:54:07.466375       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:54:07.975980       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:54:37.474634       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:54:37.985966       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:55:07.480398       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:55:07.994814       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1128 00:55:26.412110       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="400.47µs"
	I1128 00:55:37.411489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="130.042µs"
	E1128 00:55:37.492742       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:55:38.008374       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:56:07.498231       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:56:08.016988       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:56:37.504851       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:56:38.027660       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:57:07.510649       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:57:08.036730       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:57:37.517864       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:57:38.045308       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55] <==
	* I1128 00:44:25.466336       1 server_others.go:69] "Using iptables proxy"
	I1128 00:44:26.002587       1 node.go:141] Successfully retrieved node IP: 192.168.72.242
	I1128 00:44:26.087238       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1128 00:44:26.087290       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1128 00:44:26.102784       1 server_others.go:152] "Using iptables Proxier"
	I1128 00:44:26.102851       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1128 00:44:26.103156       1 server.go:846] "Version info" version="v1.28.4"
	I1128 00:44:26.103173       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 00:44:26.104608       1 config.go:188] "Starting service config controller"
	I1128 00:44:26.104718       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1128 00:44:26.104740       1 config.go:97] "Starting endpoint slice config controller"
	I1128 00:44:26.104744       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1128 00:44:26.105989       1 config.go:315] "Starting node config controller"
	I1128 00:44:26.105998       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1128 00:44:26.205914       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1128 00:44:26.206086       1 shared_informer.go:318] Caches are synced for service config
	I1128 00:44:26.206474       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193] <==
	* I1128 00:44:19.545213       1 serving.go:348] Generated self-signed cert in-memory
	W1128 00:44:22.631840       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1128 00:44:22.631948       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 00:44:22.631965       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1128 00:44:22.631975       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1128 00:44:22.721534       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1128 00:44:22.721630       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 00:44:22.724544       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1128 00:44:22.724715       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1128 00:44:22.726559       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1128 00:44:22.726682       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1128 00:44:22.825153       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 00:43:49 UTC, ends at Tue 2023-11-28 00:57:53 UTC. --
	Nov 28 00:55:13 default-k8s-diff-port-488423 kubelet[918]: E1128 00:55:13.405229     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 00:55:15 default-k8s-diff-port-488423 kubelet[918]: E1128 00:55:15.416198     918 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 00:55:15 default-k8s-diff-port-488423 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 00:55:15 default-k8s-diff-port-488423 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 00:55:15 default-k8s-diff-port-488423 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 00:55:26 default-k8s-diff-port-488423 kubelet[918]: E1128 00:55:26.392765     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 00:55:37 default-k8s-diff-port-488423 kubelet[918]: E1128 00:55:37.394207     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 00:55:52 default-k8s-diff-port-488423 kubelet[918]: E1128 00:55:52.392707     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 00:56:07 default-k8s-diff-port-488423 kubelet[918]: E1128 00:56:07.393380     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 00:56:15 default-k8s-diff-port-488423 kubelet[918]: E1128 00:56:15.416157     918 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 00:56:15 default-k8s-diff-port-488423 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 00:56:15 default-k8s-diff-port-488423 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 00:56:15 default-k8s-diff-port-488423 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 00:56:20 default-k8s-diff-port-488423 kubelet[918]: E1128 00:56:20.393122     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 00:56:33 default-k8s-diff-port-488423 kubelet[918]: E1128 00:56:33.393198     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 00:56:46 default-k8s-diff-port-488423 kubelet[918]: E1128 00:56:46.393402     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 00:56:59 default-k8s-diff-port-488423 kubelet[918]: E1128 00:56:59.393139     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 00:57:13 default-k8s-diff-port-488423 kubelet[918]: E1128 00:57:13.393947     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 00:57:15 default-k8s-diff-port-488423 kubelet[918]: E1128 00:57:15.416361     918 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 00:57:15 default-k8s-diff-port-488423 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 00:57:15 default-k8s-diff-port-488423 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 00:57:15 default-k8s-diff-port-488423 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 00:57:24 default-k8s-diff-port-488423 kubelet[918]: E1128 00:57:24.392946     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 00:57:37 default-k8s-diff-port-488423 kubelet[918]: E1128 00:57:37.394581     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 00:57:48 default-k8s-diff-port-488423 kubelet[918]: E1128 00:57:48.392726     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	
	* 
	* ==> storage-provisioner [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc] <==
	* I1128 00:44:26.521255       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 00:44:26.530266       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 00:44:26.530345       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1128 00:44:44.281979       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1128 00:44:44.282476       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-488423_bd9676ab-00e6-4be8-b688-a9333b84eabd!
	I1128 00:44:44.283307       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0b5cd0a1-2266-494b-b45d-c4f4999214bf", APIVersion:"v1", ResourceVersion:"585", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-488423_bd9676ab-00e6-4be8-b688-a9333b84eabd became leader
	I1128 00:44:44.383405       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-488423_bd9676ab-00e6-4be8-b688-a9333b84eabd!
	

                                                
                                                
-- /stdout --
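Note: the repeated metrics-server ImagePullBackOff errors in the kubelet log above are expected for this test profile, which enables the addon against an unreachable registry (--registries=MetricsServer=fake.domain, as recorded for the earlier addons enable metrics-server step in the Audit log). A minimal check of the image the pod is configured to pull, assuming kubectl access to the same context and that the pod still exists under this name, would be:

	kubectl --context default-k8s-diff-port-488423 -n kube-system get pod metrics-server-57f55c9bc5-fk9xx -o jsonpath='{.spec.containers[0].image}'

The output should match the image named in the kubelet errors, fake.domain/registry.k8s.io/echoserver:1.4.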
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-488423 -n default-k8s-diff-port-488423
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-488423 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-fk9xx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-488423 describe pod metrics-server-57f55c9bc5-fk9xx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-488423 describe pod metrics-server-57f55c9bc5-fk9xx: exit status 1 (67.223989ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-fk9xx" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-488423 describe pod metrics-server-57f55c9bc5-fk9xx: exit status 1
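The NotFound above most likely reflects the missing namespace flag rather than a vanished pod: the kubelet log places the pod in kube-system (pod="kube-system/metrics-server-57f55c9bc5-fk9xx"), while the describe was issued against the default namespace. A namespaced variant of the same check, assuming the context is still reachable, would be:

	kubectl --context default-k8s-diff-port-488423 -n kube-system describe pod metrics-server-57f55c9bc5-fk9xx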
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1128 00:50:27.680469   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-473615 -n no-preload-473615
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-11-28 00:58:15.95938201 +0000 UTC m=+5597.524408643
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
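A direct way to check whether any dashboard pods were created at all, assuming kubectl access to the no-preload-473615 context, is to query the same label the test waits on:

	kubectl --context no-preload-473615 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard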
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-473615 -n no-preload-473615
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-473615 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-473615 logs -n 25: (1.568192084s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-188325                                 | cert-options-188325          | jenkins | v1.32.0 | 28 Nov 23 00:33 UTC | 28 Nov 23 00:33 UTC |
	| start   | -p no-preload-473615                                   | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:33 UTC | 28 Nov 23 00:36 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| start   | -p cert-expiration-747416                              | cert-expiration-747416       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:35 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-747416                              | cert-expiration-747416       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:35 UTC |
	| start   | -p embed-certs-304541                                  | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-732472        | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-732472                              | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-789586                              | stopped-upgrade-789586       | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-304541            | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-304541                                  | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-789586                              | stopped-upgrade-789586       | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-001086 | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	|         | disable-driver-mounts-001086                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:37 UTC |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-473615             | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:37 UTC | 28 Nov 23 00:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-473615                                   | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-732472             | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-488423  | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC | 28 Nov 23 00:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-732472                              | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC | 28 Nov 23 00:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-304541                 | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-304541                                  | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC | 28 Nov 23 00:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-473615                  | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-473615                                   | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC | 28 Nov 23 00:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-488423       | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:40 UTC | 28 Nov 23 00:48 UTC |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 00:40:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 00:40:42.238362   46126 out.go:296] Setting OutFile to fd 1 ...
	I1128 00:40:42.238498   46126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:40:42.238513   46126 out.go:309] Setting ErrFile to fd 2...
	I1128 00:40:42.238520   46126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:40:42.238712   46126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1128 00:40:42.239236   46126 out.go:303] Setting JSON to false
	I1128 00:40:42.240138   46126 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4989,"bootTime":1701127053,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 00:40:42.240194   46126 start.go:138] virtualization: kvm guest
	I1128 00:40:42.242505   46126 out.go:177] * [default-k8s-diff-port-488423] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 00:40:42.243937   46126 out.go:177]   - MINIKUBE_LOCATION=17206
	I1128 00:40:42.243990   46126 notify.go:220] Checking for updates...
	I1128 00:40:42.245317   46126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 00:40:42.246717   46126 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:40:42.248096   46126 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1128 00:40:42.249294   46126 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 00:40:42.250596   46126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 00:40:42.252296   46126 config.go:182] Loaded profile config "default-k8s-diff-port-488423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:40:42.252793   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:40:42.252854   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:40:42.267605   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45895
	I1128 00:40:42.267958   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:40:42.268457   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:40:42.268479   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:40:42.268774   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:40:42.268971   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:40:42.269215   46126 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 00:40:42.269470   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:40:42.269501   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:40:42.283984   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34957
	I1128 00:40:42.284338   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:40:42.284786   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:40:42.284808   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:40:42.285077   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:40:42.285263   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:40:42.319077   46126 out.go:177] * Using the kvm2 driver based on existing profile
	I1128 00:40:42.320321   46126 start.go:298] selected driver: kvm2
	I1128 00:40:42.320332   46126 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-488423 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-488423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.242 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:40:42.320481   46126 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 00:40:42.321242   46126 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:40:42.321325   46126 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17206-4749/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 00:40:42.335477   46126 install.go:137] /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I1128 00:40:42.335818   46126 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 00:40:42.335887   46126 cni.go:84] Creating CNI manager for ""
	I1128 00:40:42.335907   46126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:40:42.335922   46126 start_flags.go:323] config:
	{Name:default-k8s-diff-port-488423 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-488423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.242 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:40:42.336092   46126 iso.go:125] acquiring lock: {Name:mkcbf4fbddcb89ef7fa17df683cb708781ecb7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:40:42.337823   46126 out.go:177] * Starting control plane node default-k8s-diff-port-488423 in cluster default-k8s-diff-port-488423
	I1128 00:40:40.713025   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:40:42.338980   46126 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:40:42.339010   46126 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1128 00:40:42.339024   46126 cache.go:56] Caching tarball of preloaded images
	I1128 00:40:42.339105   46126 preload.go:174] Found /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 00:40:42.339117   46126 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 00:40:42.339232   46126 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/config.json ...
	I1128 00:40:42.339416   46126 start.go:365] acquiring machines lock for default-k8s-diff-port-488423: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 00:40:43.785024   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:40:49.865013   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:40:52.936964   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:40:59.017058   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:02.089017   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:08.169021   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:11.241040   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:17.321032   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:20.393000   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:26.473039   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:29.544989   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:35.625074   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:38.697020   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:44.777040   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:47.849040   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:53.929055   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:57.001005   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:03.081016   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:06.153078   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:12.233029   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:15.305165   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:21.385067   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:24.457038   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:30.537025   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:33.608998   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:39.689061   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:42.761012   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:48.841003   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:51.912985   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:54.916816   45580 start.go:369] acquired machines lock for "embed-certs-304541" in 3m47.030120592s
	I1128 00:42:54.916877   45580 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:42:54.916890   45580 fix.go:54] fixHost starting: 
	I1128 00:42:54.917233   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:42:54.917266   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:42:54.932296   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38887
	I1128 00:42:54.932712   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:42:54.933230   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:42:54.933254   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:42:54.933574   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:42:54.933837   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:42:54.934006   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:42:54.935712   45580 fix.go:102] recreateIfNeeded on embed-certs-304541: state=Stopped err=<nil>
	I1128 00:42:54.935737   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	W1128 00:42:54.935937   45580 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:42:54.937893   45580 out.go:177] * Restarting existing kvm2 VM for "embed-certs-304541" ...
	I1128 00:42:54.914751   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:42:54.914794   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:42:54.916666   45269 machine.go:91] provisioned docker machine in 4m37.413850055s
	I1128 00:42:54.916713   45269 fix.go:56] fixHost completed within 4m37.433506318s
	I1128 00:42:54.916719   45269 start.go:83] releasing machines lock for "old-k8s-version-732472", held for 4m37.433526985s
	W1128 00:42:54.916738   45269 start.go:691] error starting host: provision: host is not running
	W1128 00:42:54.916844   45269 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1128 00:42:54.916854   45269 start.go:706] Will try again in 5 seconds ...
	I1128 00:42:54.939120   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Start
	I1128 00:42:54.939284   45580 main.go:141] libmachine: (embed-certs-304541) Ensuring networks are active...
	I1128 00:42:54.940122   45580 main.go:141] libmachine: (embed-certs-304541) Ensuring network default is active
	I1128 00:42:54.940636   45580 main.go:141] libmachine: (embed-certs-304541) Ensuring network mk-embed-certs-304541 is active
	I1128 00:42:54.941025   45580 main.go:141] libmachine: (embed-certs-304541) Getting domain xml...
	I1128 00:42:54.941883   45580 main.go:141] libmachine: (embed-certs-304541) Creating domain...
	I1128 00:42:56.157644   45580 main.go:141] libmachine: (embed-certs-304541) Waiting to get IP...
	I1128 00:42:56.158479   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:56.158803   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:56.158888   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:56.158791   46474 retry.go:31] will retry after 235.266272ms: waiting for machine to come up
	I1128 00:42:56.395238   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:56.395630   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:56.395664   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:56.395579   46474 retry.go:31] will retry after 352.110542ms: waiting for machine to come up
	I1128 00:42:56.749150   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:56.749542   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:56.749570   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:56.749500   46474 retry.go:31] will retry after 364.122623ms: waiting for machine to come up
	I1128 00:42:57.115054   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:57.115497   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:57.115526   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:57.115450   46474 retry.go:31] will retry after 583.197763ms: waiting for machine to come up
	I1128 00:42:57.700134   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:57.700551   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:57.700577   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:57.700497   46474 retry.go:31] will retry after 515.615548ms: waiting for machine to come up
	I1128 00:42:59.917964   45269 start.go:365] acquiring machines lock for old-k8s-version-732472: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 00:42:58.218252   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:58.218630   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:58.218668   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:58.218611   46474 retry.go:31] will retry after 690.258077ms: waiting for machine to come up
	I1128 00:42:58.910090   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:58.910438   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:58.910464   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:58.910413   46474 retry.go:31] will retry after 737.779074ms: waiting for machine to come up
	I1128 00:42:59.649308   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:59.649634   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:59.649661   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:59.649609   46474 retry.go:31] will retry after 1.23938471s: waiting for machine to come up
	I1128 00:43:00.890867   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:00.891318   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:00.891356   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:00.891298   46474 retry.go:31] will retry after 1.475598535s: waiting for machine to come up
	I1128 00:43:02.368630   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:02.369159   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:02.369189   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:02.369085   46474 retry.go:31] will retry after 2.323321s: waiting for machine to come up
	I1128 00:43:04.695735   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:04.696175   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:04.696208   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:04.696131   46474 retry.go:31] will retry after 1.903335453s: waiting for machine to come up
	I1128 00:43:06.601229   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:06.601657   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:06.601687   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:06.601612   46474 retry.go:31] will retry after 2.205948796s: waiting for machine to come up
	I1128 00:43:08.809792   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:08.810161   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:08.810188   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:08.810149   46474 retry.go:31] will retry after 3.31430253s: waiting for machine to come up
	I1128 00:43:12.126852   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:12.127294   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:12.127323   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:12.127249   46474 retry.go:31] will retry after 3.492216742s: waiting for machine to come up
	I1128 00:43:16.981905   45815 start.go:369] acquired machines lock for "no-preload-473615" in 3m38.128436656s
	I1128 00:43:16.981988   45815 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:43:16.982000   45815 fix.go:54] fixHost starting: 
	I1128 00:43:16.982400   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:43:16.982434   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:43:17.001935   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39505
	I1128 00:43:17.002390   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:43:17.002899   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:43:17.002930   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:43:17.003303   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:43:17.003515   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:17.003658   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:43:17.005243   45815 fix.go:102] recreateIfNeeded on no-preload-473615: state=Stopped err=<nil>
	I1128 00:43:17.005273   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	W1128 00:43:17.005442   45815 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:43:17.007831   45815 out.go:177] * Restarting existing kvm2 VM for "no-preload-473615" ...
	I1128 00:43:15.620590   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.621046   45580 main.go:141] libmachine: (embed-certs-304541) Found IP for machine: 192.168.50.93
	I1128 00:43:15.621071   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has current primary IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.621083   45580 main.go:141] libmachine: (embed-certs-304541) Reserving static IP address...
	I1128 00:43:15.621440   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "embed-certs-304541", mac: "52:54:00:0a:1d:4f", ip: "192.168.50.93"} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.621473   45580 main.go:141] libmachine: (embed-certs-304541) DBG | skip adding static IP to network mk-embed-certs-304541 - found existing host DHCP lease matching {name: "embed-certs-304541", mac: "52:54:00:0a:1d:4f", ip: "192.168.50.93"}
	I1128 00:43:15.621484   45580 main.go:141] libmachine: (embed-certs-304541) Reserved static IP address: 192.168.50.93
	I1128 00:43:15.621498   45580 main.go:141] libmachine: (embed-certs-304541) Waiting for SSH to be available...
	I1128 00:43:15.621516   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Getting to WaitForSSH function...
	I1128 00:43:15.623594   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.623865   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.623897   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.623968   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Using SSH client type: external
	I1128 00:43:15.623989   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa (-rw-------)
	I1128 00:43:15.624044   45580 main.go:141] libmachine: (embed-certs-304541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.93 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:43:15.624057   45580 main.go:141] libmachine: (embed-certs-304541) DBG | About to run SSH command:
	I1128 00:43:15.624068   45580 main.go:141] libmachine: (embed-certs-304541) DBG | exit 0
	I1128 00:43:15.708868   45580 main.go:141] libmachine: (embed-certs-304541) DBG | SSH cmd err, output: <nil>: 
	I1128 00:43:15.709246   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetConfigRaw
	I1128 00:43:15.709989   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetIP
	I1128 00:43:15.712312   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.712623   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.712660   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.712968   45580 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/config.json ...
	I1128 00:43:15.713166   45580 machine.go:88] provisioning docker machine ...
	I1128 00:43:15.713183   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:15.713360   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetMachineName
	I1128 00:43:15.713552   45580 buildroot.go:166] provisioning hostname "embed-certs-304541"
	I1128 00:43:15.713573   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetMachineName
	I1128 00:43:15.713731   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:15.716027   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.716386   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.716419   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.716530   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:15.716703   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:15.716856   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:15.717034   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:15.717229   45580 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:15.717565   45580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.93 22 <nil> <nil>}
	I1128 00:43:15.717579   45580 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-304541 && echo "embed-certs-304541" | sudo tee /etc/hostname
	I1128 00:43:15.841766   45580 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-304541
	
	I1128 00:43:15.841821   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:15.844529   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.844872   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.844919   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.845037   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:15.845231   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:15.845360   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:15.845476   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:15.845616   45580 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:15.845976   45580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.93 22 <nil> <nil>}
	I1128 00:43:15.846002   45580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-304541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-304541/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-304541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:43:15.965821   45580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:43:15.965855   45580 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:43:15.965876   45580 buildroot.go:174] setting up certificates
	I1128 00:43:15.965890   45580 provision.go:83] configureAuth start
	I1128 00:43:15.965903   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetMachineName
	I1128 00:43:15.966183   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetIP
	I1128 00:43:15.968916   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.969285   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.969313   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.969483   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:15.971549   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.971913   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.971949   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.972092   45580 provision.go:138] copyHostCerts
	I1128 00:43:15.972168   45580 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:43:15.972182   45580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:43:15.972260   45580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:43:15.972415   45580 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:43:15.972427   45580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:43:15.972472   45580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:43:15.972562   45580 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:43:15.972572   45580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:43:15.972603   45580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:43:15.972663   45580 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.embed-certs-304541 san=[192.168.50.93 192.168.50.93 localhost 127.0.0.1 minikube embed-certs-304541]
	I1128 00:43:16.272269   45580 provision.go:172] copyRemoteCerts
	I1128 00:43:16.272333   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:43:16.272354   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.274793   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.275102   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.275138   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.275285   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.275495   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.275628   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.275752   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:43:16.361853   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 00:43:16.386340   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:43:16.410490   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1128 00:43:16.433471   45580 provision.go:86] duration metric: configureAuth took 467.56808ms
	I1128 00:43:16.433505   45580 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:43:16.433686   45580 config.go:182] Loaded profile config "embed-certs-304541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:43:16.433760   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.436514   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.436987   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.437029   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.437129   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.437316   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.437472   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.437614   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.437748   45580 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:16.438055   45580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.93 22 <nil> <nil>}
	I1128 00:43:16.438072   45580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:43:16.732374   45580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:43:16.732407   45580 machine.go:91] provisioned docker machine in 1.019227514s
	I1128 00:43:16.732419   45580 start.go:300] post-start starting for "embed-certs-304541" (driver="kvm2")
	I1128 00:43:16.732429   45580 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:43:16.732474   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.732847   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:43:16.732879   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.735564   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.735987   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.736027   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.736210   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.736393   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.736549   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.736714   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:43:16.824741   45580 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:43:16.829313   45580 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:43:16.829347   45580 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:43:16.829426   45580 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:43:16.829529   45580 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:43:16.829642   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:43:16.839740   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:16.862881   45580 start.go:303] post-start completed in 130.432418ms
	I1128 00:43:16.862911   45580 fix.go:56] fixHost completed within 21.946020541s
	I1128 00:43:16.862938   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.865721   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.866113   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.866144   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.866336   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.866545   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.866744   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.866869   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.867046   45580 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:16.867350   45580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.93 22 <nil> <nil>}
	I1128 00:43:16.867359   45580 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:43:16.981759   45580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701132196.930241591
	
	I1128 00:43:16.981779   45580 fix.go:206] guest clock: 1701132196.930241591
	I1128 00:43:16.981786   45580 fix.go:219] Guest: 2023-11-28 00:43:16.930241591 +0000 UTC Remote: 2023-11-28 00:43:16.862915941 +0000 UTC m=+249.133993071 (delta=67.32565ms)
	I1128 00:43:16.981804   45580 fix.go:190] guest clock delta is within tolerance: 67.32565ms
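The two fix.go lines above compare the guest clock (obtained via `date +%s.%N` over SSH) against the host clock and continue only because the 67ms delta is inside the allowed skew. A minimal Go sketch of that check, assuming a 2s tolerance (the actual value is not shown in the log) and hypothetical helper names rather than minikube's fix.go:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// withinTolerance reports the guest/host clock delta and whether it is small
// enough to skip resynchronising the guest clock.
func withinTolerance(guestOut string, host time.Time, tol time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := host.Sub(guest)
	return delta, math.Abs(float64(delta)) <= float64(tol)
}

func main() {
	// The guest timestamp below is the one reported in the log above.
	delta, ok := withinTolerance("1701132196.930241591", time.Now(), 2*time.Second)
	fmt.Println(delta, ok)
}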
	I1128 00:43:16.981809   45580 start.go:83] releasing machines lock for "embed-certs-304541", held for 22.064954687s
	I1128 00:43:16.981848   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.982121   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetIP
	I1128 00:43:16.984621   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.984927   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.984986   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.985171   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.985675   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.985825   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.985892   45580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:43:16.985926   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.986025   45580 ssh_runner.go:195] Run: cat /version.json
	I1128 00:43:16.986054   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.988651   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.988839   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.989079   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.989113   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.989367   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.989411   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.989451   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.989491   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.989544   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.989648   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.989692   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.989781   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.989860   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:43:16.989933   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:43:17.104567   45580 ssh_runner.go:195] Run: systemctl --version
	I1128 00:43:17.110844   45580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:43:17.254201   45580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:43:17.262078   45580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:43:17.262154   45580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:43:17.282179   45580 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:43:17.282209   45580 start.go:472] detecting cgroup driver to use...
	I1128 00:43:17.282271   45580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:43:17.296891   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:43:17.311479   45580 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:43:17.311552   45580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:43:17.325946   45580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:43:17.340513   45580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:43:17.469601   45580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:43:17.605127   45580 docker.go:219] disabling docker service ...
	I1128 00:43:17.605199   45580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:43:17.621850   45580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:43:17.634608   45580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:43:17.753009   45580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:43:17.859260   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:43:17.872564   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:43:17.889701   45580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 00:43:17.889755   45580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:17.898724   45580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:43:17.898799   45580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:17.907565   45580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:17.916243   45580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:17.925280   45580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:43:17.934933   45580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:43:17.943902   45580 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:43:17.943960   45580 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:43:17.957608   45580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
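The sequence above is the usual bridge-netfilter fallback: the sysctl probe fails because br_netfilter is not loaded, so the module is loaded with modprobe and IPv4 forwarding is enabled before the runtime restart. A minimal sketch of that fallback (hypothetical helper, not minikube's crio.go):

package main

import (
	"fmt"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	// Succeeds only when /proc/sys/net/bridge/bridge-nf-call-iptables exists.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil
	}
	// Key absent: the br_netfilter module is not loaded yet.
	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %w", err)
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	fmt.Println(ensureBridgeNetfilter())
}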
	I1128 00:43:17.967379   45580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:43:18.074173   45580 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 00:43:18.251191   45580 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:43:18.251264   45580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:43:18.259963   45580 start.go:540] Will wait 60s for crictl version
	I1128 00:43:18.260041   45580 ssh_runner.go:195] Run: which crictl
	I1128 00:43:18.263936   45580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:43:18.303087   45580 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:43:18.303181   45580 ssh_runner.go:195] Run: crio --version
	I1128 00:43:18.344939   45580 ssh_runner.go:195] Run: crio --version
	I1128 00:43:18.402444   45580 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 00:43:17.009281   45815 main.go:141] libmachine: (no-preload-473615) Calling .Start
	I1128 00:43:17.009442   45815 main.go:141] libmachine: (no-preload-473615) Ensuring networks are active...
	I1128 00:43:17.010161   45815 main.go:141] libmachine: (no-preload-473615) Ensuring network default is active
	I1128 00:43:17.010485   45815 main.go:141] libmachine: (no-preload-473615) Ensuring network mk-no-preload-473615 is active
	I1128 00:43:17.010860   45815 main.go:141] libmachine: (no-preload-473615) Getting domain xml...
	I1128 00:43:17.011780   45815 main.go:141] libmachine: (no-preload-473615) Creating domain...
	I1128 00:43:18.289916   45815 main.go:141] libmachine: (no-preload-473615) Waiting to get IP...
	I1128 00:43:18.290892   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:18.291348   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:18.291434   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:18.291321   46604 retry.go:31] will retry after 208.579367ms: waiting for machine to come up
	I1128 00:43:18.501947   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:18.502401   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:18.502431   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:18.502362   46604 retry.go:31] will retry after 296.427399ms: waiting for machine to come up
	I1128 00:43:18.403974   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetIP
	I1128 00:43:18.406811   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:18.407171   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:18.407201   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:18.407459   45580 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1128 00:43:18.411727   45580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:18.423460   45580 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:43:18.423570   45580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:43:18.463722   45580 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 00:43:18.463797   45580 ssh_runner.go:195] Run: which lz4
	I1128 00:43:18.468008   45580 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1128 00:43:18.472523   45580 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 00:43:18.472560   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 00:43:20.378745   45580 crio.go:444] Took 1.910818 seconds to copy over tarball
	I1128 00:43:20.378836   45580 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 00:43:18.801131   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:18.801707   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:18.801741   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:18.801666   46604 retry.go:31] will retry after 355.365314ms: waiting for machine to come up
	I1128 00:43:19.159088   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:19.159590   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:19.159628   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:19.159550   46604 retry.go:31] will retry after 584.908889ms: waiting for machine to come up
	I1128 00:43:19.746379   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:19.746941   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:19.746978   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:19.746901   46604 retry.go:31] will retry after 707.432097ms: waiting for machine to come up
	I1128 00:43:20.455880   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:20.456378   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:20.456402   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:20.456346   46604 retry.go:31] will retry after 598.57984ms: waiting for machine to come up
	I1128 00:43:21.056102   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:21.056548   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:21.056579   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:21.056500   46604 retry.go:31] will retry after 742.55033ms: waiting for machine to come up
	I1128 00:43:21.800382   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:21.800895   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:21.800926   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:21.800841   46604 retry.go:31] will retry after 1.138217867s: waiting for machine to come up
	I1128 00:43:22.941401   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:22.941902   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:22.941932   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:22.941867   46604 retry.go:31] will retry after 1.552423219s: waiting for machine to come up
	I1128 00:43:23.310969   45580 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.932089296s)
	I1128 00:43:23.311004   45580 crio.go:451] Took 2.932228 seconds to extract the tarball
	I1128 00:43:23.311017   45580 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 00:43:23.351844   45580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:43:23.397599   45580 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 00:43:23.397625   45580 cache_images.go:84] Images are preloaded, skipping loading
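For reference, the preload flow above first runs `crictl images --output json`, and only when the expected images are missing copies the lz4 tarball over SSH, unpacks it under /var, and deletes it; the second crictl check then finds all images preloaded. A minimal sketch of the extraction step, assuming lz4 and sudo are available on the guest (hypothetical helper, not minikube's preload.go/crio.go):

package main

import (
	"fmt"
	"os/exec"
)

// restorePreload unpacks a preloaded image tarball into /var and removes it,
// mirroring the `tar -I lz4 -C /var -xf` and rm steps in the log above.
func restorePreload(tarball string) error {
	if err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).Run(); err != nil {
		return fmt.Errorf("extract %s: %w", tarball, err)
	}
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	fmt.Println(restorePreload("/preloaded.tar.lz4"))
}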
	I1128 00:43:23.397705   45580 ssh_runner.go:195] Run: crio config
	I1128 00:43:23.460298   45580 cni.go:84] Creating CNI manager for ""
	I1128 00:43:23.460326   45580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:43:23.460348   45580 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:43:23.460383   45580 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.93 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-304541 NodeName:embed-certs-304541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.93"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.93 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:43:23.460547   45580 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.93
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-304541"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.93
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.93"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:43:23.460641   45580 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-304541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.93
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-304541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 00:43:23.460696   45580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 00:43:23.470334   45580 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:43:23.470410   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:43:23.480675   45580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1128 00:43:23.497482   45580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:43:23.513709   45580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1128 00:43:23.530363   45580 ssh_runner.go:195] Run: grep 192.168.50.93	control-plane.minikube.internal$ /etc/hosts
	I1128 00:43:23.533938   45580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.93	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:23.546399   45580 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541 for IP: 192.168.50.93
	I1128 00:43:23.546443   45580 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:43:23.546632   45580 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:43:23.546695   45580 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:43:23.546799   45580 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/client.key
	I1128 00:43:23.546892   45580 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/apiserver.key.9bda4d83
	I1128 00:43:23.546960   45580 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/proxy-client.key
	I1128 00:43:23.547122   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:43:23.547178   45580 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:43:23.547196   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:43:23.547237   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:43:23.547280   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:43:23.547317   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:43:23.547392   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:23.548287   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:43:23.571910   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1128 00:43:23.597339   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:43:23.621977   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 00:43:23.648048   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:43:23.671213   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:43:23.695307   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:43:23.719122   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:43:23.743153   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:43:23.766469   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:43:23.789932   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:43:23.813950   45580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:43:23.830291   45580 ssh_runner.go:195] Run: openssl version
	I1128 00:43:23.837945   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:43:23.847572   45580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:23.852284   45580 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:23.852334   45580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:23.860003   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:43:23.872829   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:43:23.886286   45580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:43:23.892997   45580 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:43:23.893079   45580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:43:23.899923   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:43:23.909771   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:43:23.919498   45580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:43:23.924066   45580 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:43:23.924126   45580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:43:23.929583   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
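The openssl/ln pairs above install each CA certificate under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how the system trust store resolves issuers. A minimal Go sketch of the same idea, assuming openssl is on PATH (hypothetical helper, not minikube's certs.go):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash creates /etc/ssl/certs/<subject-hash>.0 pointing at certPath,
// mirroring the `openssl x509 -hash` plus `ln -fs` steps shown above.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace an existing link, as `ln -fs` would
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}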
	I1128 00:43:23.939366   45580 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:43:23.944091   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:43:23.950255   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:43:23.956493   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:43:23.962278   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:43:23.970032   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:43:23.977660   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1128 00:43:23.984257   45580 kubeadm.go:404] StartCluster: {Name:embed-certs-304541 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-304541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.93 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:43:23.984408   45580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:43:23.984471   45580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:43:24.026147   45580 cri.go:89] found id: ""
	I1128 00:43:24.026222   45580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:43:24.035520   45580 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:43:24.035550   45580 kubeadm.go:636] restartCluster start
	I1128 00:43:24.035631   45580 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:43:24.044318   45580 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:24.045591   45580 kubeconfig.go:92] found "embed-certs-304541" server: "https://192.168.50.93:8443"
	I1128 00:43:24.047987   45580 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:43:24.056482   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:24.056541   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:24.067055   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:24.067072   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:24.067108   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:24.076950   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:24.577344   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:24.577441   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:24.588707   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:25.077862   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:25.077965   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:25.089729   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:25.577938   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:25.578019   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:25.593191   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:26.077819   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:26.077891   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:26.091224   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:26.577757   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:26.577844   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:26.588769   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:27.077106   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:27.077235   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:27.088668   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:27.577169   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:27.577249   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:27.588221   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:24.496599   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:24.496989   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:24.497018   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:24.496943   46604 retry.go:31] will retry after 2.05343917s: waiting for machine to come up
	I1128 00:43:26.552249   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:26.552684   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:26.552716   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:26.552636   46604 retry.go:31] will retry after 2.338063311s: waiting for machine to come up
	I1128 00:43:28.077161   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:28.077265   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:28.088552   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:28.577077   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:28.577168   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:28.588335   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:29.077927   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:29.078027   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:29.089679   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:29.577193   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:29.577293   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:29.588230   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:30.077430   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:30.077542   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:30.088547   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:30.577088   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:30.577203   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:30.588230   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:31.077809   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:31.077907   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:31.090329   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:31.577897   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:31.577975   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:31.591561   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:32.077101   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:32.077206   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:32.087945   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:32.577446   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:32.577528   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:32.588542   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:28.893450   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:28.893812   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:28.893841   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:28.893761   46604 retry.go:31] will retry after 3.578756905s: waiting for machine to come up
	I1128 00:43:32.473719   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:32.474199   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:32.474234   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:32.474155   46604 retry.go:31] will retry after 3.070637163s: waiting for machine to come up
	I1128 00:43:36.805769   46126 start.go:369] acquired machines lock for "default-k8s-diff-port-488423" in 2m54.466321295s
	I1128 00:43:36.805830   46126 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:43:36.805840   46126 fix.go:54] fixHost starting: 
	I1128 00:43:36.806271   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:43:36.806311   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:43:36.825195   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32859
	I1128 00:43:36.825723   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:43:36.826325   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:43:36.826348   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:43:36.826703   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:43:36.826932   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:36.827106   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:43:36.828683   46126 fix.go:102] recreateIfNeeded on default-k8s-diff-port-488423: state=Stopped err=<nil>
	I1128 00:43:36.828709   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	W1128 00:43:36.828895   46126 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:43:36.830377   46126 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-488423" ...
	I1128 00:43:36.831614   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Start
	I1128 00:43:36.831781   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Ensuring networks are active...
	I1128 00:43:36.832447   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Ensuring network default is active
	I1128 00:43:36.832841   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Ensuring network mk-default-k8s-diff-port-488423 is active
	I1128 00:43:36.833220   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Getting domain xml...
	I1128 00:43:36.833947   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Creating domain...
	I1128 00:43:33.077031   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:33.077109   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:33.088430   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:33.578007   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:33.578093   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:33.589185   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:34.056684   45580 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:43:34.056718   45580 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:43:34.056733   45580 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:43:34.056836   45580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:43:34.096078   45580 cri.go:89] found id: ""
	I1128 00:43:34.096157   45580 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:43:34.111200   45580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:43:34.119603   45580 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:43:34.119654   45580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:43:34.128150   45580 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:43:34.128170   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:34.236389   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:34.879134   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:35.070594   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:35.159436   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:35.223694   45580 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:43:35.223787   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:35.238511   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:35.753955   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:36.254449   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:36.753943   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:37.253987   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:37.753515   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:37.777619   45580 api_server.go:72] duration metric: took 2.553922938s to wait for apiserver process to appear ...
	I1128 00:43:37.777646   45580 api_server.go:88] waiting for apiserver healthz status ...
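From here the restart path polls `pgrep -xnf kube-apiserver.*minikube.*` roughly twice a second until the process appears (2.55s above) and then waits on the healthz endpoint. A minimal sketch of such a wait loop (hypothetical helper, not minikube's api_server.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process shows up or the
// timeout elapses, mirroring the repeated "Checking apiserver status" lines above.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	fmt.Println(waitForAPIServer(60 * time.Second))
}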
	I1128 00:43:35.548294   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.548718   45815 main.go:141] libmachine: (no-preload-473615) Found IP for machine: 192.168.61.195
	I1128 00:43:35.548746   45815 main.go:141] libmachine: (no-preload-473615) Reserving static IP address...
	I1128 00:43:35.548790   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has current primary IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.549194   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "no-preload-473615", mac: "52:54:00:bb:93:0d", ip: "192.168.61.195"} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.549223   45815 main.go:141] libmachine: (no-preload-473615) DBG | skip adding static IP to network mk-no-preload-473615 - found existing host DHCP lease matching {name: "no-preload-473615", mac: "52:54:00:bb:93:0d", ip: "192.168.61.195"}
	I1128 00:43:35.549238   45815 main.go:141] libmachine: (no-preload-473615) Reserved static IP address: 192.168.61.195
	I1128 00:43:35.549253   45815 main.go:141] libmachine: (no-preload-473615) Waiting for SSH to be available...
	I1128 00:43:35.549265   45815 main.go:141] libmachine: (no-preload-473615) DBG | Getting to WaitForSSH function...
	I1128 00:43:35.551246   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.551573   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.551601   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.551757   45815 main.go:141] libmachine: (no-preload-473615) DBG | Using SSH client type: external
	I1128 00:43:35.551778   45815 main.go:141] libmachine: (no-preload-473615) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa (-rw-------)
	I1128 00:43:35.551811   45815 main.go:141] libmachine: (no-preload-473615) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:43:35.551831   45815 main.go:141] libmachine: (no-preload-473615) DBG | About to run SSH command:
	I1128 00:43:35.551867   45815 main.go:141] libmachine: (no-preload-473615) DBG | exit 0
	I1128 00:43:35.636291   45815 main.go:141] libmachine: (no-preload-473615) DBG | SSH cmd err, output: <nil>: 
	I1128 00:43:35.636667   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetConfigRaw
	I1128 00:43:35.637278   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetIP
	I1128 00:43:35.639799   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.640164   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.640209   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.640423   45815 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/config.json ...
	I1128 00:43:35.640598   45815 machine.go:88] provisioning docker machine ...
	I1128 00:43:35.640632   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:35.640853   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetMachineName
	I1128 00:43:35.641071   45815 buildroot.go:166] provisioning hostname "no-preload-473615"
	I1128 00:43:35.641090   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetMachineName
	I1128 00:43:35.641242   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:35.643554   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.643845   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.643905   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.643977   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:35.644140   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:35.644279   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:35.644370   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:35.644540   45815 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:35.644971   45815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I1128 00:43:35.644986   45815 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-473615 && echo "no-preload-473615" | sudo tee /etc/hostname
	I1128 00:43:35.766635   45815 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-473615
	
	I1128 00:43:35.766689   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:35.769704   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.770068   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.770108   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.770279   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:35.770463   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:35.770622   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:35.770733   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:35.770849   45815 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:35.771214   45815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I1128 00:43:35.771235   45815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-473615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-473615/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-473615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:43:35.889378   45815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
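Editor's note: the provisioning step above drives everything through SSH: it sets the guest hostname, writes /etc/hostname, and only rewrites or appends the 127.0.1.1 entry in /etc/hosts when no matching entry already exists. As a rough illustration, the Go sketch below shells out to the system ssh client with options similar to the ones logged for this machine; minikube itself uses its in-process ("native") SSH client, so treat this as a sketch of the idea rather than the actual code path. The address and key path are copied from the log above.

package main

import (
	"fmt"
	"os/exec"
)

// runRemote executes a command on the guest over SSH, roughly mirroring the
// options the log shows (no host-key checking, key-based docker@<ip> login).
func runRemote(addr, keyPath, cmd string) (string, error) {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		"docker@" + addr,
		cmd,
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	return string(out), err
}

func main() {
	// Hostname step from the log; the idempotent /etc/hosts script shown
	// above would be sent the same way as a second command.
	script := `sudo hostname no-preload-473615 && echo "no-preload-473615" | sudo tee /etc/hostname`
	out, err := runRemote("192.168.61.195",
		"/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa",
		script)
	fmt.Println(out, err)
}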
	I1128 00:43:35.889416   45815 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:43:35.889480   45815 buildroot.go:174] setting up certificates
	I1128 00:43:35.889494   45815 provision.go:83] configureAuth start
	I1128 00:43:35.889506   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetMachineName
	I1128 00:43:35.889810   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetIP
	I1128 00:43:35.892924   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.893313   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.893359   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.893477   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:35.895759   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.896140   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.896169   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.896281   45815 provision.go:138] copyHostCerts
	I1128 00:43:35.896345   45815 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:43:35.896370   45815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:43:35.896448   45815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:43:35.896565   45815 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:43:35.896577   45815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:43:35.896618   45815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:43:35.896713   45815 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:43:35.896728   45815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:43:35.896778   45815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
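Editor's note: copyHostCerts above refreshes the top-level ca.pem, cert.pem and key.pem under the .minikube directory from .minikube/certs, removing any stale copy first (found, rm, then cp). A minimal Go sketch of that remove-then-copy pattern follows; the file modes are assumptions for illustration, since the log only reports byte counts.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// refreshCert mirrors the found/rm/cp sequence in the log: if a stale copy
// exists at dst it is removed, then src is copied into place.
func refreshCert(src, dst string, mode os.FileMode) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	return os.WriteFile(dst, data, mode)
}

func main() {
	base := "/home/jenkins/minikube-integration/17206-4749/.minikube"
	for _, name := range []string{"ca.pem", "cert.pem", "key.pem"} {
		// key.pem is private material, so use a tighter mode for it
		// (an assumption here; the log does not state the modes applied).
		mode := os.FileMode(0644)
		if name == "key.pem" {
			mode = 0600
		}
		err := refreshCert(filepath.Join(base, "certs", name), filepath.Join(base, name), mode)
		fmt.Println(name, err)
	}
}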
	I1128 00:43:35.896856   45815 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.no-preload-473615 san=[192.168.61.195 192.168.61.195 localhost 127.0.0.1 minikube no-preload-473615]
	I1128 00:43:36.080367   45815 provision.go:172] copyRemoteCerts
	I1128 00:43:36.080429   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:43:36.080451   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.082989   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.083327   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.083358   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.083529   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.083745   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.083927   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.084077   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:43:36.166338   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:43:36.191867   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1128 00:43:36.214184   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 00:43:36.237102   45815 provision.go:86] duration metric: configureAuth took 347.594627ms
	I1128 00:43:36.237135   45815 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:43:36.237338   45815 config.go:182] Loaded profile config "no-preload-473615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 00:43:36.237421   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.240408   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.240787   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.240826   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.240995   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.241193   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.241368   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.241539   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.241712   45815 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:36.242000   45815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I1128 00:43:36.242016   45815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:43:36.565582   45815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:43:36.565609   45815 machine.go:91] provisioned docker machine in 924.985826ms
	I1128 00:43:36.565623   45815 start.go:300] post-start starting for "no-preload-473615" (driver="kvm2")
	I1128 00:43:36.565649   45815 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:43:36.565677   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.565994   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:43:36.566025   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.568653   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.569032   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.569064   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.569148   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.569337   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.569502   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.569678   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:43:36.655695   45815 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:43:36.659909   45815 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:43:36.659941   45815 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:43:36.660020   45815 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:43:36.660108   45815 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:43:36.660228   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:43:36.669575   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:36.690970   45815 start.go:303] post-start completed in 125.33198ms
	I1128 00:43:36.690998   45815 fix.go:56] fixHost completed within 19.708998537s
	I1128 00:43:36.691022   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.693929   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.694361   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.694400   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.694665   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.694877   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.695064   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.695237   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.695404   45815 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:36.695738   45815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I1128 00:43:36.695750   45815 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1128 00:43:36.805602   45815 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701132216.779589412
	
	I1128 00:43:36.805626   45815 fix.go:206] guest clock: 1701132216.779589412
	I1128 00:43:36.805637   45815 fix.go:219] Guest: 2023-11-28 00:43:36.779589412 +0000 UTC Remote: 2023-11-28 00:43:36.691003095 +0000 UTC m=+237.986754258 (delta=88.586317ms)
	I1128 00:43:36.805673   45815 fix.go:190] guest clock delta is within tolerance: 88.586317ms
	I1128 00:43:36.805678   45815 start.go:83] releasing machines lock for "no-preload-473615", held for 19.823720426s
	I1128 00:43:36.805705   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.805989   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetIP
	I1128 00:43:36.808864   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.809316   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.809346   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.809529   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.810162   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.810361   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.810441   45815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:43:36.810494   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.810824   45815 ssh_runner.go:195] Run: cat /version.json
	I1128 00:43:36.810845   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.813747   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.813979   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.814064   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.814102   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.814263   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.814444   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.814471   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.814508   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.814659   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.814764   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.814844   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.814913   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:43:36.815484   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.815640   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:43:36.923054   45815 ssh_runner.go:195] Run: systemctl --version
	I1128 00:43:36.930078   45815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:43:37.082251   45815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:43:37.088817   45815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:43:37.088890   45815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:43:37.110921   45815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:43:37.110950   45815 start.go:472] detecting cgroup driver to use...
	I1128 00:43:37.111017   45815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:43:37.128450   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:43:37.144814   45815 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:43:37.144875   45815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:43:37.158185   45815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:43:37.170311   45815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:43:37.287910   45815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:43:37.414142   45815 docker.go:219] disabling docker service ...
	I1128 00:43:37.414222   45815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:43:37.427085   45815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:43:37.438631   45815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:43:37.559028   45815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:43:37.676646   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:43:37.689214   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:43:37.709298   45815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 00:43:37.709370   45815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:37.718368   45815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:43:37.718446   45815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:37.727188   45815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:37.736230   45815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:37.745594   45815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:43:37.755149   45815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:43:37.763179   45815 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:43:37.763237   45815 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:43:37.780091   45815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:43:37.790861   45815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:43:37.923396   45815 ssh_runner.go:195] Run: sudo systemctl restart crio
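Editor's note: the sequence above reconfigures CRI-O in place: the pause image is pinned to registry.k8s.io/pause:3.9, cgroup_manager is switched to cgroupfs with conmon_cgroup = "pod", IP forwarding is enabled (br_netfilter is loaded because the sysctl probe failed), and crio is restarted after a daemon-reload. The Go sketch below simply replays those same one-liners with exec; it is illustrative only and would need to run as root on the guest.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same edits the log applies to /etc/crio/crio.conf.d/02-crio.conf,
	// followed by the kernel-module, sysctl and service-restart steps.
	cmds := []string{
		`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`modprobe br_netfilter`,
		`echo 1 > /proc/sys/net/ipv4/ip_forward`,
		`systemctl daemon-reload`,
		`systemctl restart crio`,
	}
	for _, c := range cmds {
		out, err := exec.Command("sh", "-c", c).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: %v\n%s\n", c, err, out)
			return
		}
	}
	fmt.Println("crio reconfigured and restarted")
}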
	I1128 00:43:38.133933   45815 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:43:38.134013   45815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:43:38.143538   45815 start.go:540] Will wait 60s for crictl version
	I1128 00:43:38.143598   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.149212   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:43:38.205988   45815 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:43:38.206079   45815 ssh_runner.go:195] Run: crio --version
	I1128 00:43:38.261211   45815 ssh_runner.go:195] Run: crio --version
	I1128 00:43:38.315398   45815 out.go:177] * Preparing Kubernetes v1.29.0-rc.0 on CRI-O 1.24.1 ...
	I1128 00:43:38.317052   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetIP
	I1128 00:43:38.320262   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:38.320708   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:38.320736   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:38.320976   45815 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1128 00:43:38.325437   45815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:38.337411   45815 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1128 00:43:38.337457   45815 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:43:38.384218   45815 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.0". assuming images are not preloaded.
	I1128 00:43:38.384245   45815 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.0 registry.k8s.io/kube-controller-manager:v1.29.0-rc.0 registry.k8s.io/kube-scheduler:v1.29.0-rc.0 registry.k8s.io/kube-proxy:v1.29.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1128 00:43:38.384325   45815 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:38.384533   45815 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.384553   45815 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1128 00:43:38.384634   45815 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.384726   45815 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.384817   45815 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.384870   45815 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.384931   45815 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.386318   45815 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:38.386368   45815 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1128 00:43:38.386381   45815 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.386373   45815 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.386324   45815 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.386316   45815 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.386319   45815 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.386326   45815 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.526945   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.527246   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1128 00:43:38.538042   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.538097   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.539522   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.549538   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.557097   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.621381   45815 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.0" does not exist at hash "4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9" in container runtime
	I1128 00:43:38.621440   45815 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.621516   45815 ssh_runner.go:195] Run: which crictl
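Editor's note: with no preload tarball available for v1.29.0-rc.0, each required image is probed with "sudo podman image inspect --format {{.Id}}", and any image whose ID does not match the expected hash is marked "needs transfer" and queued for removal and re-load from the cache. A minimal Go sketch of that probe (image list abbreviated to a few of the names above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID returns the local image ID as reported by podman, or "" if the
// image is not present in the container runtime's store.
func imageID(image string) string {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return ""
	}
	return strings.TrimSpace(string(out))
}

func main() {
	for _, img := range []string{
		"registry.k8s.io/kube-apiserver:v1.29.0-rc.0",
		"registry.k8s.io/kube-scheduler:v1.29.0-rc.0",
		"registry.k8s.io/pause:3.9",
	} {
		if id := imageID(img); id == "" {
			fmt.Println(img, "needs transfer")
		} else {
			fmt.Println(img, "present as", id)
		}
	}
}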
	I1128 00:43:38.208059   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting to get IP...
	I1128 00:43:38.209168   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.209599   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.209688   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:38.209572   46749 retry.go:31] will retry after 256.562292ms: waiting for machine to come up
	I1128 00:43:38.468199   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.468798   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.468828   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:38.468722   46749 retry.go:31] will retry after 287.91937ms: waiting for machine to come up
	I1128 00:43:38.758157   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.758610   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.758640   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:38.758555   46749 retry.go:31] will retry after 377.696379ms: waiting for machine to come up
	I1128 00:43:39.138269   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:39.138761   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:39.138795   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:39.138706   46749 retry.go:31] will retry after 476.093256ms: waiting for machine to come up
	I1128 00:43:39.616256   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:39.616611   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:39.616638   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:39.616577   46749 retry.go:31] will retry after 628.654941ms: waiting for machine to come up
	I1128 00:43:40.246993   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:40.247498   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:40.247543   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:40.247455   46749 retry.go:31] will retry after 607.981973ms: waiting for machine to come up
	I1128 00:43:40.857220   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:40.857634   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:40.857663   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:40.857592   46749 retry.go:31] will retry after 866.108704ms: waiting for machine to come up
	I1128 00:43:41.725140   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:41.725695   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:41.725716   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:41.725609   46749 retry.go:31] will retry after 1.158669064s: waiting for machine to come up
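Editor's note: interleaved with the no-preload provisioning, the default-k8s-diff-port-488423 machine is still waiting for a DHCP lease, retrying with a growing, jittered delay (256ms, 287ms, 377ms, ... 1.158s). A rough Go sketch of that retry shape follows; lookupIP here is a stand-in for the libvirt lease query, not the real implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a placeholder for querying the libvirt DHCP leases for the
// domain's MAC address; it fails until the guest has requested an address.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address of domain")
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the delay and add jitter, roughly matching the spacing of
		// the retries logged above.
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("retry %d: %v, will retry after %v\n", attempt, err, wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	fmt.Println("gave up waiting for machine to come up")
}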
	I1128 00:43:37.777663   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:42.028441   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:43:42.028478   45580 api_server.go:103] status: https://192.168.50.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:43:42.028492   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:42.043818   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:43:42.043846   45580 api_server.go:103] status: https://192.168.50.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:43:42.544532   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:42.551469   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:43:42.551505   45580 api_server.go:103] status: https://192.168.50.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:43:43.044055   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:43.050233   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:43:43.050262   45580 api_server.go:103] status: https://192.168.50.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:43:43.544857   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:43.550155   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 200:
	ok
	I1128 00:43:43.558929   45580 api_server.go:141] control plane version: v1.28.4
	I1128 00:43:43.558962   45580 api_server.go:131] duration metric: took 5.781308354s to wait for apiserver health ...
	I1128 00:43:43.558974   45580 cni.go:84] Creating CNI manager for ""
	I1128 00:43:43.558984   45580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:43:43.560872   45580 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
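Editor's note: on the embed-certs cluster the apiserver comes up in stages: anonymous requests to /healthz are first rejected with 403, then the endpoint returns 500 while the rbac/bootstrap-roles and bootstrap-system-priority-classes post-start hooks finish, and finally it answers 200 "ok", at which point CNI configuration proceeds. A minimal Go sketch of polling /healthz the same way; certificate verification is skipped purely for the sketch, and the address is the one from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed certificate here; a real
			// client would pin the cluster CA instead of skipping checks.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.50.93:8443/healthz"
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // control plane is healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}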
	I1128 00:43:38.775724   45815 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1128 00:43:38.775776   45815 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.775827   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.775953   45815 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1128 00:43:38.776035   45815 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.0" does not exist at hash "e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7" in container runtime
	I1128 00:43:38.776059   45815 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.776106   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.776188   45815 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.0" does not exist at hash "e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4" in container runtime
	I1128 00:43:38.776220   45815 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.776247   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.776315   45815 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.0" does not exist at hash "df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55" in container runtime
	I1128 00:43:38.776335   45815 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.776360   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.776456   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.776562   45815 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.776601   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.792457   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.792533   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.792584   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.792634   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.792714   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.929517   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.0
	I1128 00:43:38.929640   45815 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0
	I1128 00:43:38.941438   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.0
	I1128 00:43:38.941544   45815 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0
	I1128 00:43:38.941623   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.0
	I1128 00:43:38.941704   45815 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0
	I1128 00:43:38.964773   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1128 00:43:38.964890   45815 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I1128 00:43:38.964980   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.0
	I1128 00:43:38.965038   45815 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0
	I1128 00:43:38.965118   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1128 00:43:38.965175   45815 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I1128 00:43:38.965250   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0 (exists)
	I1128 00:43:38.965262   45815 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0
	I1128 00:43:38.965291   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0
	I1128 00:43:38.970386   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1128 00:43:38.970443   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0 (exists)
	I1128 00:43:38.970458   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0 (exists)
	I1128 00:43:38.974722   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1128 00:43:38.974970   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0 (exists)
	I1128 00:43:39.286976   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:41.143462   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0: (2.178138495s)
	I1128 00:43:41.143491   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.0 from cache
	I1128 00:43:41.143520   45815 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1128 00:43:41.143536   45815 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.856517641s)
	I1128 00:43:41.143563   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1128 00:43:41.143596   45815 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1128 00:43:41.143630   45815 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:41.143678   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:43.335836   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.192246706s)
	I1128 00:43:43.335894   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1128 00:43:43.335859   45815 ssh_runner.go:235] Completed: which crictl: (2.192168329s)
	I1128 00:43:43.335938   45815 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0
	I1128 00:43:43.335970   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0
	I1128 00:43:43.335971   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:42.886014   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:42.886540   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:42.886564   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:42.886457   46749 retry.go:31] will retry after 1.698662705s: waiting for machine to come up
	I1128 00:43:44.586452   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:44.586892   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:44.586917   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:44.586848   46749 retry.go:31] will retry after 1.681392058s: waiting for machine to come up
	I1128 00:43:46.270022   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:46.270545   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:46.270578   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:46.270491   46749 retry.go:31] will retry after 2.061464637s: waiting for machine to come up
	I1128 00:43:43.562274   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:43:43.583729   45580 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:43:43.614704   45580 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:43:43.627543   45580 system_pods.go:59] 8 kube-system pods found
	I1128 00:43:43.627587   45580 system_pods.go:61] "coredns-5dd5756b68-crmfq" [e412b41a-a4a4-4c8c-8fe9-b96c52e5815c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:43:43.627602   45580 system_pods.go:61] "etcd-embed-certs-304541" [ceeea55a-ffbb-4c18-b563-3552f8d47f3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 00:43:43.627622   45580 system_pods.go:61] "kube-apiserver-embed-certs-304541" [e7bd6f60-fe90-4413-b906-0101ad3bda9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 00:43:43.627632   45580 system_pods.go:61] "kube-controller-manager-embed-certs-304541" [e083fd78-3aad-44ed-8bac-fc72eeded7f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 00:43:43.627652   45580 system_pods.go:61] "kube-proxy-6d4rt" [bc801fd6-e726-41d3-afcf-5ed86723dca9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 00:43:43.627665   45580 system_pods.go:61] "kube-scheduler-embed-certs-304541" [df10b58f-43ec-4492-8d95-0d91ee88fec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 00:43:43.627676   45580 system_pods.go:61] "metrics-server-57f55c9bc5-sx4m7" [1618a041-6077-4076-8178-f2692dc983b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:43:43.627686   45580 system_pods.go:61] "storage-provisioner" [acaed13d-b10c-4fb6-b2b7-452cf928e1e5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 00:43:43.627696   45580 system_pods.go:74] duration metric: took 12.96707ms to wait for pod list to return data ...
	I1128 00:43:43.627709   45580 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:43:43.632593   45580 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:43:43.632628   45580 node_conditions.go:123] node cpu capacity is 2
	I1128 00:43:43.632642   45580 node_conditions.go:105] duration metric: took 4.924217ms to run NodePressure ...
	I1128 00:43:43.632667   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:43.945692   45580 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:43:43.950639   45580 kubeadm.go:787] kubelet initialised
	I1128 00:43:43.950666   45580 kubeadm.go:788] duration metric: took 4.940609ms waiting for restarted kubelet to initialise ...
	I1128 00:43:43.950677   45580 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:43:43.956229   45580 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-crmfq" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:45.975328   45580 pod_ready.go:102] pod "coredns-5dd5756b68-crmfq" in "kube-system" namespace has status "Ready":"False"
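The pod_ready polling above is roughly equivalent to waiting on the pod's Ready condition with kubectl; an illustrative sketch, assuming the embed-certs-304541 kubeconfig context named in the log:

    # Block until the coredns pod reports Ready, mirroring the 4m0s budget in the log.
    kubectl --context embed-certs-304541 -n kube-system \
      wait --for=condition=Ready pod/coredns-5dd5756b68-crmfq --timeout=4m
    # Or list all kube-system pods and their readiness at a glance.
    kubectl --context embed-certs-304541 -n kube-system get pods -o wide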
	I1128 00:43:46.036655   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0: (2.700640635s)
	I1128 00:43:46.036696   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.0 from cache
	I1128 00:43:46.036721   45815 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0
	I1128 00:43:46.036786   45815 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.700708537s)
	I1128 00:43:46.036846   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1128 00:43:46.036792   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0
	I1128 00:43:46.036943   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1128 00:43:48.418287   45815 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.381312759s)
	I1128 00:43:48.418326   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0: (2.381419374s)
	I1128 00:43:48.418339   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1128 00:43:48.418346   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.0 from cache
	I1128 00:43:48.418370   45815 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1128 00:43:48.418426   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1128 00:43:48.333973   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:48.334480   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:48.334509   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:48.334432   46749 retry.go:31] will retry after 3.421790433s: waiting for machine to come up
	I1128 00:43:51.757991   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:51.758478   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:51.758505   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:51.758448   46749 retry.go:31] will retry after 3.726327818s: waiting for machine to come up
	I1128 00:43:48.484870   45580 pod_ready.go:92] pod "coredns-5dd5756b68-crmfq" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:48.484903   45580 pod_ready.go:81] duration metric: took 4.52864781s waiting for pod "coredns-5dd5756b68-crmfq" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:48.484916   45580 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:49.006488   45580 pod_ready.go:92] pod "etcd-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:49.006516   45580 pod_ready.go:81] duration metric: took 521.591023ms waiting for pod "etcd-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:49.006528   45580 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:49.014231   45580 pod_ready.go:92] pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:49.014258   45580 pod_ready.go:81] duration metric: took 7.721879ms waiting for pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:49.014270   45580 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:51.284611   45580 pod_ready.go:102] pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace has status "Ready":"False"
	I1128 00:43:52.636848   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.218389263s)
	I1128 00:43:52.636883   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1128 00:43:52.636912   45815 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0
	I1128 00:43:52.636964   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0
	I1128 00:43:56.745904   45269 start.go:369] acquired machines lock for "old-k8s-version-732472" in 56.827856444s
	I1128 00:43:56.745949   45269 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:43:56.745959   45269 fix.go:54] fixHost starting: 
	I1128 00:43:56.746379   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:43:56.746447   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:43:56.764386   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35269
	I1128 00:43:56.764907   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:43:56.765554   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:43:56.765584   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:43:56.766037   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:43:56.766221   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:43:56.766365   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:43:56.768054   45269 fix.go:102] recreateIfNeeded on old-k8s-version-732472: state=Stopped err=<nil>
	I1128 00:43:56.768082   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	W1128 00:43:56.768219   45269 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:43:56.771618   45269 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-732472" ...
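The restart the kvm2 driver performs here corresponds to starting the stopped libvirt domain; a hedged sketch using the domain name from the log (not commands minikube itself logs):

    # Check the state of the stopped domain, start it, then confirm it is running.
    virsh --connect qemu:///system domstate old-k8s-version-732472
    virsh --connect qemu:///system start old-k8s-version-732472
    virsh --connect qemu:///system domstate old-k8s-version-732472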
	I1128 00:43:55.486531   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.487099   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Found IP for machine: 192.168.72.242
	I1128 00:43:55.487128   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Reserving static IP address...
	I1128 00:43:55.487158   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has current primary IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.487539   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-488423", mac: "52:54:00:4c:3b:25", ip: "192.168.72.242"} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.487574   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | skip adding static IP to network mk-default-k8s-diff-port-488423 - found existing host DHCP lease matching {name: "default-k8s-diff-port-488423", mac: "52:54:00:4c:3b:25", ip: "192.168.72.242"}
	I1128 00:43:55.487595   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Reserved static IP address: 192.168.72.242
	I1128 00:43:55.487609   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for SSH to be available...
	I1128 00:43:55.487622   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Getting to WaitForSSH function...
	I1128 00:43:55.489858   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.490219   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.490253   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.490324   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Using SSH client type: external
	I1128 00:43:55.490373   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa (-rw-------)
	I1128 00:43:55.490414   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.242 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:43:55.490431   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | About to run SSH command:
	I1128 00:43:55.490447   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | exit 0
	I1128 00:43:55.584551   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | SSH cmd err, output: <nil>: 
	I1128 00:43:55.584987   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetConfigRaw
	I1128 00:43:55.585628   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetIP
	I1128 00:43:55.588444   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.588889   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.588924   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.589207   46126 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/config.json ...
	I1128 00:43:55.589475   46126 machine.go:88] provisioning docker machine ...
	I1128 00:43:55.589501   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:55.589744   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetMachineName
	I1128 00:43:55.590007   46126 buildroot.go:166] provisioning hostname "default-k8s-diff-port-488423"
	I1128 00:43:55.590031   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetMachineName
	I1128 00:43:55.590203   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:55.592733   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.593136   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.593170   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.593313   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:55.593480   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.593628   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.593756   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:55.593918   46126 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:55.594316   46126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.242 22 <nil> <nil>}
	I1128 00:43:55.594333   46126 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-488423 && echo "default-k8s-diff-port-488423" | sudo tee /etc/hostname
	I1128 00:43:55.739338   46126 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-488423
	
	I1128 00:43:55.739368   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:55.742483   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.742870   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.742906   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.743009   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:55.743215   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.743365   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.743512   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:55.743669   46126 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:55.744119   46126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.242 22 <nil> <nil>}
	I1128 00:43:55.744140   46126 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-488423' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-488423/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-488423' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:43:55.883117   46126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:43:55.883146   46126 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:43:55.883185   46126 buildroot.go:174] setting up certificates
	I1128 00:43:55.883198   46126 provision.go:83] configureAuth start
	I1128 00:43:55.883216   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetMachineName
	I1128 00:43:55.883566   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetIP
	I1128 00:43:55.886292   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.886625   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.886652   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.886796   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:55.888873   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.889213   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.889233   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.889347   46126 provision.go:138] copyHostCerts
	I1128 00:43:55.889401   46126 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:43:55.889413   46126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:43:55.889478   46126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:43:55.889611   46126 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:43:55.889623   46126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:43:55.889650   46126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:43:55.889729   46126 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:43:55.889738   46126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:43:55.889765   46126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:43:55.889848   46126 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-488423 san=[192.168.72.242 192.168.72.242 localhost 127.0.0.1 minikube default-k8s-diff-port-488423]
	I1128 00:43:55.945434   46126 provision.go:172] copyRemoteCerts
	I1128 00:43:55.945516   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:43:55.945547   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:55.948894   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.949387   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.949422   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.949800   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:55.950023   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.950215   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:55.950367   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:43:56.045647   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:43:56.069972   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1128 00:43:56.093947   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 00:43:56.118840   46126 provision.go:86] duration metric: configureAuth took 235.628083ms
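After configureAuth copies the regenerated server certificate onto the machine, its SANs and chain can be checked against the IP used above (192.168.72.242); an illustrative check, assuming openssl is available inside the guest:

    # Print the subject and SAN list of the server certificate minikube just installed.
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
    # Confirm the certificate chains to the copied CA.
    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem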
	I1128 00:43:56.118867   46126 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:43:56.119072   46126 config.go:182] Loaded profile config "default-k8s-diff-port-488423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:43:56.119159   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.122135   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.122514   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.122550   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.122680   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.122884   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.123076   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.123253   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.123418   46126 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:56.123729   46126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.242 22 <nil> <nil>}
	I1128 00:43:56.123746   46126 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:43:56.476330   46126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:43:56.476360   46126 machine.go:91] provisioned docker machine in 886.868182ms
	I1128 00:43:56.476384   46126 start.go:300] post-start starting for "default-k8s-diff-port-488423" (driver="kvm2")
	I1128 00:43:56.476399   46126 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:43:56.476422   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.476787   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:43:56.476824   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.479803   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.480168   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.480208   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.480342   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.480542   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.480729   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.480901   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:43:56.574040   46126 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:43:56.578163   46126 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:43:56.578186   46126 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:43:56.578247   46126 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:43:56.578339   46126 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:43:56.578455   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:43:56.586455   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:56.613452   46126 start.go:303] post-start completed in 137.050871ms
	I1128 00:43:56.613484   46126 fix.go:56] fixHost completed within 19.807643021s
	I1128 00:43:56.613510   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.616834   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.617216   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.617253   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.617478   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.617686   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.617859   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.618105   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.618302   46126 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:56.618618   46126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.242 22 <nil> <nil>}
	I1128 00:43:56.618630   46126 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:43:56.745691   46126 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701132236.690190729
	
	I1128 00:43:56.745711   46126 fix.go:206] guest clock: 1701132236.690190729
	I1128 00:43:56.745731   46126 fix.go:219] Guest: 2023-11-28 00:43:56.690190729 +0000 UTC Remote: 2023-11-28 00:43:56.613489194 +0000 UTC m=+194.421672716 (delta=76.701535ms)
	I1128 00:43:56.745784   46126 fix.go:190] guest clock delta is within tolerance: 76.701535ms
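The clock-skew check above compares a host timestamp with one taken over SSH in the guest; the same comparison can be made manually, assuming the SSH key path and address shown earlier in this log:

    # Capture host and guest clocks back to back; the difference should stay within
    # minikube's tolerance (the log reports a delta of ~77ms here).
    date -u +%s.%N
    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa \
        docker@192.168.72.242 'date +%s.%N'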
	I1128 00:43:56.745798   46126 start.go:83] releasing machines lock for "default-k8s-diff-port-488423", held for 19.939986738s
	I1128 00:43:56.745837   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.746091   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetIP
	I1128 00:43:56.749097   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.749453   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.749486   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.749648   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.750192   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.750392   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.750446   46126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:43:56.750493   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.750661   46126 ssh_runner.go:195] Run: cat /version.json
	I1128 00:43:56.750685   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.753480   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.753655   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.753948   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.753976   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.754096   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.754163   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.754191   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.754241   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.754327   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.754474   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.754489   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.754621   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:43:56.754644   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.754779   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:43:56.850794   46126 ssh_runner.go:195] Run: systemctl --version
	I1128 00:43:56.872044   46126 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:43:57.016328   46126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:43:57.022389   46126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:43:57.022463   46126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:43:57.039925   46126 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:43:57.039959   46126 start.go:472] detecting cgroup driver to use...
	I1128 00:43:57.040030   46126 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:43:57.056385   46126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:43:57.068344   46126 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:43:57.068413   46126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:43:57.081752   46126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:43:57.095169   46126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:43:57.192392   46126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:43:56.772995   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Start
	I1128 00:43:56.773150   45269 main.go:141] libmachine: (old-k8s-version-732472) Ensuring networks are active...
	I1128 00:43:56.774032   45269 main.go:141] libmachine: (old-k8s-version-732472) Ensuring network default is active
	I1128 00:43:56.774327   45269 main.go:141] libmachine: (old-k8s-version-732472) Ensuring network mk-old-k8s-version-732472 is active
	I1128 00:43:56.774732   45269 main.go:141] libmachine: (old-k8s-version-732472) Getting domain xml...
	I1128 00:43:56.775433   45269 main.go:141] libmachine: (old-k8s-version-732472) Creating domain...
	I1128 00:43:53.781169   45580 pod_ready.go:92] pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:53.781193   45580 pod_ready.go:81] duration metric: took 4.766915226s waiting for pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.781203   45580 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6d4rt" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.789370   45580 pod_ready.go:92] pod "kube-proxy-6d4rt" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:53.789400   45580 pod_ready.go:81] duration metric: took 8.189391ms waiting for pod "kube-proxy-6d4rt" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.789412   45580 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.794277   45580 pod_ready.go:92] pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:53.794299   45580 pod_ready.go:81] duration metric: took 4.87905ms waiting for pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.794307   45580 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:55.984645   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:43:57.310000   46126 docker.go:219] disabling docker service ...
	I1128 00:43:57.310066   46126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:43:57.324484   46126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:43:57.339752   46126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:43:57.444051   46126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:43:57.557773   46126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:43:57.571662   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:43:57.591169   46126 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 00:43:57.591230   46126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:57.605399   46126 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:43:57.605462   46126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:57.617783   46126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:57.629258   46126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:57.639844   46126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:43:57.651810   46126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:43:57.663353   46126 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:43:57.663403   46126 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:43:57.679095   46126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:43:57.688096   46126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:43:57.795868   46126 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 00:43:57.970597   46126 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:43:57.970661   46126 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:43:57.975830   46126 start.go:540] Will wait 60s for crictl version
	I1128 00:43:57.975900   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:43:57.980469   46126 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:43:58.022819   46126 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:43:58.022932   46126 ssh_runner.go:195] Run: crio --version
	I1128 00:43:58.078060   46126 ssh_runner.go:195] Run: crio --version
	I1128 00:43:58.130219   46126 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
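The CRI-O preparation steps logged above (crictl endpoint, pause image, cgroup driver, restart, version probe) condense to a handful of commands; a sketch that mirrors the sed edits from the log:

    # Point crictl at the CRI-O socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # Pin the pause image and switch CRI-O to the cgroupfs cgroup manager.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    # Restart CRI-O and confirm the runtime answers over the socket.
    sudo systemctl restart crio
    sudo crictl version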
	I1128 00:43:55.298307   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0: (2.661319898s)
	I1128 00:43:55.298330   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.0 from cache
	I1128 00:43:55.298358   45815 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1128 00:43:55.298411   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1128 00:43:56.256987   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1128 00:43:56.257041   45815 cache_images.go:123] Successfully loaded all cached images
	I1128 00:43:56.257048   45815 cache_images.go:92] LoadImages completed in 17.872790347s
	I1128 00:43:56.257142   45815 ssh_runner.go:195] Run: crio config
	I1128 00:43:56.342206   45815 cni.go:84] Creating CNI manager for ""
	I1128 00:43:56.342230   45815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:43:56.342248   45815 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:43:56.342265   45815 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.195 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-473615 NodeName:no-preload-473615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:43:56.342421   45815 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-473615"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:43:56.342519   45815 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-473615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.0 ClusterName:no-preload-473615 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
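Once the kubeadm config and kubelet drop-in shown above are written to the node, they can be sanity-checked in place; a hedged sketch, assuming this kubeadm release supports the "config validate" subcommand (present in recent versions) and using the binary and file paths from the log:

    # Validate the generated configuration against the kubeadm API schema.
    sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # Inspect the kubelet drop-in that the unit text above is written into.
    systemctl cat kubelet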
	I1128 00:43:56.342581   45815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.0
	I1128 00:43:56.352200   45815 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:43:56.352275   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:43:56.360863   45815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1128 00:43:56.378620   45815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1128 00:43:56.396120   45815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1128 00:43:56.415090   45815 ssh_runner.go:195] Run: grep 192.168.61.195	control-plane.minikube.internal$ /etc/hosts
	I1128 00:43:56.419072   45815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:56.434497   45815 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615 for IP: 192.168.61.195
	I1128 00:43:56.434534   45815 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:43:56.434702   45815 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:43:56.434766   45815 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:43:56.434899   45815 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.key
	I1128 00:43:56.434990   45815 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/apiserver.key.6c770a2d
	I1128 00:43:56.435043   45815 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/proxy-client.key
	I1128 00:43:56.435190   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:43:56.435231   45815 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:43:56.435249   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:43:56.435280   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:43:56.435317   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:43:56.435348   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:43:56.435402   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:56.436170   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:43:56.464712   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 00:43:56.492394   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:43:56.517331   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1128 00:43:56.540656   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:43:56.562997   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:43:56.587574   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:43:56.614358   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:43:56.640027   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:43:56.666632   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:43:56.690925   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:43:56.716816   45815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:43:56.734079   45815 ssh_runner.go:195] Run: openssl version
	I1128 00:43:56.739942   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:43:56.751230   45815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:56.757607   45815 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:56.757662   45815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:56.764184   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:43:56.777196   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:43:56.788408   45815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:43:56.793610   45815 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:43:56.793667   45815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:43:56.799203   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:43:56.809821   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:43:56.820489   45815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:43:56.825268   45815 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:43:56.825335   45815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:43:56.830869   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:43:56.843707   45815 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:43:56.848717   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:43:56.855268   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:43:56.861889   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:43:56.867773   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:43:56.874642   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:43:56.882143   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
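For context, the "-checkend 86400" runs above ask OpenSSL whether each control-plane certificate expires within the next 24 hours. A minimal standalone Go sketch of the same check (illustrative only, not minikube's certs.go; the path in main is just one of the files checked above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // checkend reports whether the PEM certificate at path expires within d,
    // mirroring what `openssl x509 -checkend <seconds>` does for a single cert.
    func checkend(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// Expiring if NotAfter falls before now+d.
    	return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
    	// One of the files the log checks; any PEM certificate works.
    	expiring, err := checkend("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", expiring)
    }
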
	I1128 00:43:56.889812   45815 kubeadm.go:404] StartCluster: {Name:no-preload-473615 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.0 ClusterName:no-preload-473615 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.195 Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:43:56.889969   45815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:43:56.890021   45815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:43:56.931994   45815 cri.go:89] found id: ""
	I1128 00:43:56.932061   45815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:43:56.941996   45815 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:43:56.942014   45815 kubeadm.go:636] restartCluster start
	I1128 00:43:56.942074   45815 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:43:56.950854   45815 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:56.951919   45815 kubeconfig.go:92] found "no-preload-473615" server: "https://192.168.61.195:8443"
	I1128 00:43:56.954777   45815 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:43:56.963839   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:56.963902   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:56.974803   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:56.974821   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:56.974869   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:56.989023   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:57.489949   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:57.490022   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:57.501695   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:57.989930   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:57.990014   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:58.002435   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:58.489856   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:58.489946   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:58.506641   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:58.131523   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetIP
	I1128 00:43:58.134378   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:58.134826   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:58.134859   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:58.135087   46126 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1128 00:43:58.139363   46126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:58.151488   46126 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:43:58.151552   46126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:43:58.193551   46126 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 00:43:58.193618   46126 ssh_runner.go:195] Run: which lz4
	I1128 00:43:58.197624   46126 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 00:43:58.201842   46126 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 00:43:58.201875   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 00:44:00.068140   46126 crio.go:444] Took 1.870561 seconds to copy over tarball
	I1128 00:44:00.068221   46126 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
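The preload step above stats /preloaded.tar.lz4 on the VM, copies the cached image tarball over when it is missing, and unpacks it with "tar -I lz4 -C /var". A rough local equivalent of the extraction, sketched with os/exec instead of minikube's ssh_runner (paths copied from the log; the helper name is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // extractPreload unpacks an lz4-compressed image tarball into destDir, the
    // same command the log runs over SSH: tar -I lz4 -C <dest> -xf <tarball>.
    func extractPreload(tarball, destDir string) error {
    	if _, err := os.Stat(tarball); err != nil {
    		return fmt.Errorf("preload tarball missing: %w", err)
    	}
    	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", destDir, "-xf", tarball)
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	return cmd.Run()
    }

    func main() {
    	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
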
	I1128 00:43:58.122924   45269 main.go:141] libmachine: (old-k8s-version-732472) Waiting to get IP...
	I1128 00:43:58.123826   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:58.124165   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:58.124263   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:58.124146   46882 retry.go:31] will retry after 249.216665ms: waiting for machine to come up
	I1128 00:43:58.374969   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:58.375510   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:58.375537   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:58.375457   46882 retry.go:31] will retry after 317.223146ms: waiting for machine to come up
	I1128 00:43:58.694027   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:58.694483   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:58.694535   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:58.694443   46882 retry.go:31] will retry after 362.880377ms: waiting for machine to come up
	I1128 00:43:59.058976   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:59.059623   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:59.059650   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:59.059571   46882 retry.go:31] will retry after 545.497342ms: waiting for machine to come up
	I1128 00:43:59.606962   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:59.607607   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:59.607633   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:59.607558   46882 retry.go:31] will retry after 678.467206ms: waiting for machine to come up
	I1128 00:44:00.287531   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:00.288062   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:00.288103   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:00.288054   46882 retry.go:31] will retry after 817.7633ms: waiting for machine to come up
	I1128 00:44:01.107179   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:01.107748   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:01.107776   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:01.107690   46882 retry.go:31] will retry after 1.02533736s: waiting for machine to come up
	I1128 00:44:02.134384   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:02.134940   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:02.134972   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:02.134867   46882 retry.go:31] will retry after 1.291264059s: waiting for machine to come up
	I1128 00:43:58.491595   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:00.983179   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:43:58.989453   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:58.989568   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:59.006339   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:59.489912   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:59.490007   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:59.505297   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:59.989924   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:59.990020   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:00.004118   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:00.489346   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:00.489421   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:00.504026   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:00.989739   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:00.989828   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:01.006279   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:01.489872   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:01.489975   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:01.504734   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:01.989185   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:01.989269   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:02.000313   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:02.489165   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:02.489246   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:02.505444   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:02.989956   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:02.990024   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:03.003038   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:03.489556   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:03.489663   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:03.502192   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:03.282407   46126 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.2141625s)
	I1128 00:44:03.282432   46126 crio.go:451] Took 3.214263 seconds to extract the tarball
	I1128 00:44:03.282440   46126 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 00:44:03.324470   46126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:44:03.375858   46126 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 00:44:03.375881   46126 cache_images.go:84] Images are preloaded, skipping loading
	I1128 00:44:03.375944   46126 ssh_runner.go:195] Run: crio config
	I1128 00:44:03.440441   46126 cni.go:84] Creating CNI manager for ""
	I1128 00:44:03.440462   46126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:03.440479   46126 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:44:03.440496   46126 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.242 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-488423 NodeName:default-k8s-diff-port-488423 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.242"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.242 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:44:03.440670   46126 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.242
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-488423"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.242
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.242"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:44:03.440746   46126 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-488423 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.242
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-488423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
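The kubeadm config and kubelet unit above are rendered by minikube from Go templates (kubeadm.go:181 and kubeadm.go:976). As a rough illustration of that approach only, here is a toy text/template rendering of a small InitConfiguration fragment with values taken from the log; the template and field names are illustrative, not minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // A toy template in the spirit of the generated config above; minikube's
    // real template covers many more fields.
    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.AdvertiseAddress}}
    `

    type params struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    }

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(initTmpl))
    	// Values taken from the log above (default-k8s-diff-port-488423 on port 8444).
    	if err := t.Execute(os.Stdout, params{
    		AdvertiseAddress: "192.168.72.242",
    		BindPort:         8444,
    		NodeName:         "default-k8s-diff-port-488423",
    	}); err != nil {
    		panic(err)
    	}
    }
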
	I1128 00:44:03.440830   46126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 00:44:03.450060   46126 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:44:03.450138   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:44:03.458748   46126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1128 00:44:03.475315   46126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:44:03.492886   46126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1128 00:44:03.509665   46126 ssh_runner.go:195] Run: grep 192.168.72.242	control-plane.minikube.internal$ /etc/hosts
	I1128 00:44:03.513441   46126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.242	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:44:03.527336   46126 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423 for IP: 192.168.72.242
	I1128 00:44:03.527373   46126 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:44:03.527539   46126 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:44:03.527592   46126 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:44:03.527690   46126 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/client.key
	I1128 00:44:03.527770   46126 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/apiserver.key.05574f60
	I1128 00:44:03.527827   46126 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/proxy-client.key
	I1128 00:44:03.527966   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:44:03.528009   46126 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:44:03.528024   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:44:03.528062   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:44:03.528098   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:44:03.528133   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:44:03.528188   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:44:03.528787   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:44:03.553210   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 00:44:03.578548   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:44:03.604661   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 00:44:03.627640   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:44:03.653147   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:44:03.681991   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:44:03.706068   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:44:03.730092   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:44:03.751326   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:44:03.776165   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:44:03.801844   46126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:44:03.819762   46126 ssh_runner.go:195] Run: openssl version
	I1128 00:44:03.826895   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:44:03.836806   46126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:44:03.842921   46126 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:44:03.842983   46126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:44:03.848802   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:44:03.859065   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:44:03.869720   46126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:03.874600   46126 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:03.874670   46126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:03.880712   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:44:03.891524   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:44:03.901286   46126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:44:03.906102   46126 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:44:03.906163   46126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:44:03.911563   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:44:03.921606   46126 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:44:03.926553   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:44:03.932640   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:44:03.938482   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:44:03.944483   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:44:03.950430   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:44:03.956197   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1128 00:44:03.962543   46126 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-488423 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-488423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.242 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:44:03.962647   46126 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:44:03.962700   46126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:04.014418   46126 cri.go:89] found id: ""
	I1128 00:44:04.014499   46126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:44:04.024132   46126 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:44:04.024178   46126 kubeadm.go:636] restartCluster start
	I1128 00:44:04.024239   46126 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:44:04.032856   46126 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.034010   46126 kubeconfig.go:92] found "default-k8s-diff-port-488423" server: "https://192.168.72.242:8444"
	I1128 00:44:04.036458   46126 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:44:04.044461   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.044513   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.054697   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.054714   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.054759   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.066995   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.567687   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.567784   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.579528   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:05.067882   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:05.067970   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:05.082579   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:05.568116   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:05.568240   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:05.579606   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:06.067125   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:06.067229   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:06.078637   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:06.567159   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:06.567258   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:06.578623   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:07.067770   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:07.067864   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:07.081883   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:03.427919   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:03.428413   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:03.428442   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:03.428350   46882 retry.go:31] will retry after 1.150784696s: waiting for machine to come up
	I1128 00:44:04.580519   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:04.580976   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:04.581008   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:04.580941   46882 retry.go:31] will retry after 1.981268381s: waiting for machine to come up
	I1128 00:44:06.564123   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:06.564623   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:06.564641   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:06.564596   46882 retry.go:31] will retry after 2.79895226s: waiting for machine to come up
	I1128 00:44:02.984445   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:05.483562   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:03.989899   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:03.995828   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.009197   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.489749   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.489829   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.501445   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.989934   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.990019   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:05.004077   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:05.489549   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:05.489634   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:05.501227   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:05.989858   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:05.989940   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:06.003151   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:06.489699   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:06.489785   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:06.502937   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:06.964667   45815 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:44:06.964705   45815 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:44:06.964720   45815 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:44:06.964808   45815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:07.008487   45815 cri.go:89] found id: ""
	I1128 00:44:07.008572   45815 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:44:07.028576   45815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:44:07.040057   45815 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:44:07.040130   45815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:07.050063   45815 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:07.050085   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:07.199305   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:08.265283   45815 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.065924411s)
	I1128 00:44:08.265324   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:08.468254   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:08.570027   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:08.650823   45815 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:44:08.650900   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:08.667640   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:07.567667   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:07.567751   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:07.580778   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:08.067282   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:08.067368   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:08.080618   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:08.567146   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:08.567232   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:08.580324   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:09.067606   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:09.067728   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:09.083426   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:09.567987   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:09.568084   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:09.579657   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:10.067205   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:10.067292   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:10.082466   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:10.568064   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:10.568159   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:10.583356   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:11.067987   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:11.068114   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:11.084486   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:11.567945   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:11.568057   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:11.583108   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:12.068099   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:12.068186   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:12.079172   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:09.366118   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:09.366642   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:09.366677   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:09.366580   46882 retry.go:31] will retry after 2.538437833s: waiting for machine to come up
	I1128 00:44:11.906292   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:11.906799   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:11.906823   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:11.906751   46882 retry.go:31] will retry after 4.351501946s: waiting for machine to come up
	I1128 00:44:07.983966   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:09.985333   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:12.483805   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
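The pod_ready.go lines above show minikube polling the metrics-server pod until its Ready condition becomes True. A minimal client-go sketch of a single readiness check (assuming a local kubeconfig; this is not minikube's pod_ready.go, and in practice the pod would be discovered by label rather than by the name copied from the log):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the named pod has condition Ready=True.
    func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ready, err := podReady(cs, "kube-system", "metrics-server-57f55c9bc5-sx4m7")
    	fmt.Println("ready:", ready, "err:", err)
    }
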
	I1128 00:44:09.182449   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:09.681686   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:10.181905   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:10.681692   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:11.181652   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:11.209900   45815 api_server.go:72] duration metric: took 2.559073582s to wait for apiserver process to appear ...
	I1128 00:44:11.209935   45815 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:44:11.209954   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:15.242230   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:15.242261   45815 api_server.go:103] status: https://192.168.61.195:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:15.242276   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:15.285509   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:15.285538   45815 api_server.go:103] status: https://192.168.61.195:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:15.786232   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:15.791529   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:15.791565   45815 api_server.go:103] status: https://192.168.61.195:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:16.285909   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:16.290996   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:16.291040   45815 api_server.go:103] status: https://192.168.61.195:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:16.786199   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:16.792488   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 200:
	ok
	I1128 00:44:16.805778   45815 api_server.go:141] control plane version: v1.29.0-rc.0
	I1128 00:44:16.805807   45815 api_server.go:131] duration metric: took 5.595863517s to wait for apiserver health ...
	I1128 00:44:16.805817   45815 cni.go:84] Creating CNI manager for ""
	I1128 00:44:16.805825   45815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:16.807924   45815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:44:12.567969   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:12.568085   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:12.579496   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:13.068092   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:13.068164   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:13.079081   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:13.567677   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:13.567773   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:13.579000   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:14.044782   46126 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:44:14.044818   46126 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:44:14.044832   46126 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:44:14.044927   46126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:14.090411   46126 cri.go:89] found id: ""
	I1128 00:44:14.090487   46126 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:44:14.106216   46126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:44:14.116309   46126 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:44:14.116367   46126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:14.125060   46126 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:14.125082   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:14.259194   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:14.923712   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:15.113501   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:15.221455   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:15.317171   46126 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:44:15.317269   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:15.332625   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:15.847268   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:16.347347   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:16.847441   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:16.259741   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.260326   45269 main.go:141] libmachine: (old-k8s-version-732472) Found IP for machine: 192.168.39.172
	I1128 00:44:16.260347   45269 main.go:141] libmachine: (old-k8s-version-732472) Reserving static IP address...
	I1128 00:44:16.260368   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has current primary IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.260940   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "old-k8s-version-732472", mac: "52:54:00:ff:2b:fd", ip: "192.168.39.172"} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.260978   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | skip adding static IP to network mk-old-k8s-version-732472 - found existing host DHCP lease matching {name: "old-k8s-version-732472", mac: "52:54:00:ff:2b:fd", ip: "192.168.39.172"}
	I1128 00:44:16.261003   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Getting to WaitForSSH function...
	I1128 00:44:16.261021   45269 main.go:141] libmachine: (old-k8s-version-732472) Reserved static IP address: 192.168.39.172
	I1128 00:44:16.261037   45269 main.go:141] libmachine: (old-k8s-version-732472) Waiting for SSH to be available...
	I1128 00:44:16.264000   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.264370   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.264402   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.264496   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Using SSH client type: external
	I1128 00:44:16.264560   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa (-rw-------)
	I1128 00:44:16.264600   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:44:16.264624   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | About to run SSH command:
	I1128 00:44:16.264641   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | exit 0
	I1128 00:44:16.373651   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | SSH cmd err, output: <nil>: 
	I1128 00:44:16.374185   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetConfigRaw
	I1128 00:44:16.374992   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetIP
	I1128 00:44:16.378530   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.378958   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.378987   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.379390   45269 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/config.json ...
	I1128 00:44:16.379622   45269 machine.go:88] provisioning docker machine ...
	I1128 00:44:16.379646   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:16.379854   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetMachineName
	I1128 00:44:16.380005   45269 buildroot.go:166] provisioning hostname "old-k8s-version-732472"
	I1128 00:44:16.380024   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetMachineName
	I1128 00:44:16.380152   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:16.382908   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.383346   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.383376   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.383604   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:16.383824   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:16.384024   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:16.384179   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:16.384365   45269 main.go:141] libmachine: Using SSH client type: native
	I1128 00:44:16.384875   45269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1128 00:44:16.384902   45269 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-732472 && echo "old-k8s-version-732472" | sudo tee /etc/hostname
	I1128 00:44:16.547302   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-732472
	
	I1128 00:44:16.547378   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:16.550883   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.551409   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.551448   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.551634   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:16.551888   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:16.552113   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:16.552258   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:16.552465   45269 main.go:141] libmachine: Using SSH client type: native
	I1128 00:44:16.552965   45269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1128 00:44:16.552994   45269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-732472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-732472/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-732472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:44:16.705539   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:44:16.705577   45269 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:44:16.705601   45269 buildroot.go:174] setting up certificates
	I1128 00:44:16.705611   45269 provision.go:83] configureAuth start
	I1128 00:44:16.705622   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetMachineName
	I1128 00:44:16.705962   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetIP
	I1128 00:44:16.708726   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.709231   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.709283   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.709531   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:16.712023   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.712491   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.712524   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.712658   45269 provision.go:138] copyHostCerts
	I1128 00:44:16.712720   45269 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:44:16.712734   45269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:44:16.712835   45269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:44:16.712990   45269 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:44:16.713005   45269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:44:16.713041   45269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:44:16.713154   45269 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:44:16.713168   45269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:44:16.713201   45269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:44:16.713291   45269 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-732472 san=[192.168.39.172 192.168.39.172 localhost 127.0.0.1 minikube old-k8s-version-732472]
	I1128 00:44:17.255079   45269 provision.go:172] copyRemoteCerts
	I1128 00:44:17.255157   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:44:17.255184   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:17.258078   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.258486   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:17.258522   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.258704   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:17.258892   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.259071   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:17.259278   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:44:17.360891   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1128 00:44:14.981992   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:16.984334   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:16.809569   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:44:16.837545   45815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:44:16.884377   45815 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:44:16.901252   45815 system_pods.go:59] 9 kube-system pods found
	I1128 00:44:16.901296   45815 system_pods.go:61] "coredns-76f75df574-54p94" [fc2580d3-8c03-46c8-aa43-fce9472a4bbc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:44:16.901310   45815 system_pods.go:61] "coredns-76f75df574-9ptz7" [c51a1796-37bb-411b-8477-fb4d8c7e7cb2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:44:16.901322   45815 system_pods.go:61] "etcd-no-preload-473615" [c789418f-23b1-4e84-95df-e339afc358e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 00:44:16.901337   45815 system_pods.go:61] "kube-apiserver-no-preload-473615" [204c5f02-7e14-4761-9af0-606f227dee63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 00:44:16.901351   45815 system_pods.go:61] "kube-controller-manager-no-preload-473615" [2d96a78f-b0c9-4731-a8a1-ec63459a09ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 00:44:16.901368   45815 system_pods.go:61] "kube-proxy-trr4j" [df593d3d-db4c-45f9-ad79-f35fe2cdef84] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 00:44:16.901379   45815 system_pods.go:61] "kube-scheduler-no-preload-473615" [5fe2c87b-af8b-4184-8b62-399e488dcb5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 00:44:16.901393   45815 system_pods.go:61] "metrics-server-57f55c9bc5-lh4m8" [4c3ae55b-befb-44d2-8982-592acdf3eab9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:44:16.901408   45815 system_pods.go:61] "storage-provisioner" [a3e71dd4-570e-4895-aac4-d98dfbd69a6a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 00:44:16.901423   45815 system_pods.go:74] duration metric: took 17.023663ms to wait for pod list to return data ...
	I1128 00:44:16.901434   45815 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:44:16.905738   45815 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:44:16.905766   45815 node_conditions.go:123] node cpu capacity is 2
	I1128 00:44:16.905776   45815 node_conditions.go:105] duration metric: took 4.335236ms to run NodePressure ...
	I1128 00:44:16.905791   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:17.532813   45815 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:44:17.548788   45815 kubeadm.go:787] kubelet initialised
	I1128 00:44:17.548814   45815 kubeadm.go:788] duration metric: took 15.969396ms waiting for restarted kubelet to initialise ...
	I1128 00:44:17.548824   45815 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:44:17.569590   45815 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-54p94" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:17.388160   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 00:44:17.415589   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:44:17.443880   45269 provision.go:86] duration metric: configureAuth took 738.257631ms
	I1128 00:44:17.443913   45269 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:44:17.444142   45269 config.go:182] Loaded profile config "old-k8s-version-732472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 00:44:17.444240   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:17.447355   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.447699   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:17.447726   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.447980   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:17.448213   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.448382   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.448542   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:17.448730   45269 main.go:141] libmachine: Using SSH client type: native
	I1128 00:44:17.449148   45269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1128 00:44:17.449173   45269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:44:17.825162   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:44:17.825202   45269 machine.go:91] provisioned docker machine in 1.445550198s
	I1128 00:44:17.825215   45269 start.go:300] post-start starting for "old-k8s-version-732472" (driver="kvm2")
	I1128 00:44:17.825229   45269 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:44:17.825255   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:17.825631   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:44:17.825665   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:17.829047   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.829650   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:17.829813   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.829885   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:17.830108   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.830270   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:17.830427   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:44:17.933926   45269 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:44:17.939164   45269 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:44:17.939192   45269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:44:17.939273   45269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:44:17.939364   45269 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:44:17.939481   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:44:17.950901   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:44:17.983827   45269 start.go:303] post-start completed in 158.593642ms
	I1128 00:44:17.983856   45269 fix.go:56] fixHost completed within 21.237897087s
	I1128 00:44:17.983880   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:17.988473   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.988983   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:17.989011   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.989353   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:17.989611   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.989755   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.989981   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:17.990202   45269 main.go:141] libmachine: Using SSH client type: native
	I1128 00:44:17.990729   45269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1128 00:44:17.990748   45269 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:44:18.139114   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701132258.087547922
	
	I1128 00:44:18.139142   45269 fix.go:206] guest clock: 1701132258.087547922
	I1128 00:44:18.139154   45269 fix.go:219] Guest: 2023-11-28 00:44:18.087547922 +0000 UTC Remote: 2023-11-28 00:44:17.983860571 +0000 UTC m=+360.654750753 (delta=103.687351ms)
	I1128 00:44:18.139206   45269 fix.go:190] guest clock delta is within tolerance: 103.687351ms
	I1128 00:44:18.139217   45269 start.go:83] releasing machines lock for "old-k8s-version-732472", held for 21.393285553s
	I1128 00:44:18.139256   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:18.139552   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetIP
	I1128 00:44:18.142899   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.143376   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:18.143407   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.143562   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:18.144123   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:18.144308   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:18.144414   45269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:44:18.144473   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:18.144586   45269 ssh_runner.go:195] Run: cat /version.json
	I1128 00:44:18.144614   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:18.147761   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.147994   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.148459   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:18.148542   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.148581   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:18.148605   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.148878   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:18.148892   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:18.149080   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:18.149094   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:18.149266   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:18.149288   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:18.149473   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:44:18.149488   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:44:18.271569   45269 ssh_runner.go:195] Run: systemctl --version
	I1128 00:44:18.277814   45269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:44:18.432301   45269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:44:18.438677   45269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:44:18.438749   45269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:44:18.455128   45269 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:44:18.455155   45269 start.go:472] detecting cgroup driver to use...
	I1128 00:44:18.455229   45269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:44:18.472928   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:44:18.490329   45269 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:44:18.490409   45269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:44:18.505705   45269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:44:18.523509   45269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:44:18.696691   45269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:44:18.829641   45269 docker.go:219] disabling docker service ...
	I1128 00:44:18.829775   45269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:44:18.847903   45269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:44:18.863690   45269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:44:19.002181   45269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:44:19.130955   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:44:19.146034   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:44:19.165714   45269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1128 00:44:19.165790   45269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:44:19.176303   45269 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:44:19.176368   45269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:44:19.186698   45269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:44:19.196137   45269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:44:19.205054   45269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:44:19.215067   45269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:44:19.224332   45269 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:44:19.224376   45269 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:44:19.238079   45269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:44:19.246692   45269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:44:19.360913   45269 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 00:44:19.548488   45269 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:44:19.548563   45269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:44:19.553293   45269 start.go:540] Will wait 60s for crictl version
	I1128 00:44:19.553362   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:19.557103   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:44:19.605572   45269 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:44:19.605662   45269 ssh_runner.go:195] Run: crio --version
	I1128 00:44:19.655808   45269 ssh_runner.go:195] Run: crio --version
	I1128 00:44:19.709415   45269 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1128 00:44:17.346814   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:17.847354   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:17.878161   46126 api_server.go:72] duration metric: took 2.560990106s to wait for apiserver process to appear ...
	I1128 00:44:17.878189   46126 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:44:17.878218   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:17.878696   46126 api_server.go:269] stopped: https://192.168.72.242:8444/healthz: Get "https://192.168.72.242:8444/healthz": dial tcp 192.168.72.242:8444: connect: connection refused
	I1128 00:44:17.878732   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:17.879110   46126 api_server.go:269] stopped: https://192.168.72.242:8444/healthz: Get "https://192.168.72.242:8444/healthz": dial tcp 192.168.72.242:8444: connect: connection refused
	I1128 00:44:18.379800   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:19.710653   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetIP
	I1128 00:44:19.713912   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:19.714358   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:19.714402   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:19.714586   45269 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1128 00:44:19.719516   45269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:44:19.736367   45269 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1128 00:44:19.736422   45269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:44:19.788917   45269 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1128 00:44:19.789021   45269 ssh_runner.go:195] Run: which lz4
	I1128 00:44:19.793502   45269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 00:44:19.797933   45269 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 00:44:19.797967   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1128 00:44:21.595649   45269 crio.go:444] Took 1.802185 seconds to copy over tarball
	I1128 00:44:21.595754   45269 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 00:44:19.483696   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:21.485632   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:19.612824   45815 pod_ready.go:102] pod "coredns-76f75df574-54p94" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:22.111469   45815 pod_ready.go:92] pod "coredns-76f75df574-54p94" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:22.111506   45815 pod_ready.go:81] duration metric: took 4.541884744s waiting for pod "coredns-76f75df574-54p94" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:22.111522   45815 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-9ptz7" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:22.118896   45815 pod_ready.go:92] pod "coredns-76f75df574-9ptz7" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:22.118916   45815 pod_ready.go:81] duration metric: took 7.386009ms waiting for pod "coredns-76f75df574-9ptz7" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:22.118925   45815 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:22.651574   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:22.651606   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:22.651632   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:22.731086   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:22.731124   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:22.879396   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:22.889686   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:22.889721   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:23.380219   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:23.387416   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:23.387458   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:23.880170   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:23.886215   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:23.886286   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:24.380095   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:24.387531   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 200:
	ok
	I1128 00:44:24.411131   46126 api_server.go:141] control plane version: v1.28.4
	I1128 00:44:24.411169   46126 api_server.go:131] duration metric: took 6.532961174s to wait for apiserver health ...
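(The healthz polling above can be summarized as follows. This is an illustrative sketch only, not minikube's api_server.go: it assumes a hypothetical helper waitForAPIServerHealthz that probes the /healthz URL over HTTPS, treats 403 and 500 responses as "not yet healthy", and stops once a 200 is returned, which matches the retry behaviour visible in the log.)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServerHealthz polls the given /healthz URL until it returns
// HTTP 200 or the timeout expires. Responses such as 403 (anonymous access
// denied) or 500 (post-start hooks still failing) are treated as "not yet
// healthy" and retried.
func waitForAPIServerHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed certificate during bring-up,
			// so verification is skipped for this probe only (assumption for the sketch).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: the control plane is reachable
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
	}
	return fmt.Errorf("apiserver %s did not become healthy within %v", url, timeout)
}

func main() {
	if err := waitForAPIServerHealthz("https://192.168.72.242:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}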
	I1128 00:44:24.411180   46126 cni.go:84] Creating CNI manager for ""
	I1128 00:44:24.411186   46126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:24.701599   46126 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:44:24.853101   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:44:24.878687   46126 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:44:24.924669   46126 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:44:24.942030   46126 system_pods.go:59] 8 kube-system pods found
	I1128 00:44:24.942063   46126 system_pods.go:61] "coredns-5dd5756b68-n7qpb" [d027f799-6ced-488e-a4f7-6df351193c64] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:44:24.942074   46126 system_pods.go:61] "etcd-default-k8s-diff-port-488423" [55bf80da-df13-4429-962c-7fdb5ab44ea8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 00:44:24.942084   46126 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-488423" [88715645-e98e-42be-ad99-cc7711605abc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 00:44:24.942094   46126 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-488423" [07935350-12e0-4e86-8f88-7e03890aa417] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 00:44:24.942104   46126 system_pods.go:61] "kube-proxy-2sfbm" [8d92ac1f-4070-4000-9bc6-3d277e0c8c6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 00:44:24.942115   46126 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-488423" [42baed98-6b29-4f33-8bb3-df082a1b36ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 00:44:24.942134   46126 system_pods.go:61] "metrics-server-57f55c9bc5-fk9xx" [8b0d0cd6-41c5-4b67-98f9-f046e959e0e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:44:24.942152   46126 system_pods.go:61] "storage-provisioner" [f1e6e7d1-86aa-403c-b753-2b94beb7d7b1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 00:44:24.942163   46126 system_pods.go:74] duration metric: took 17.475554ms to wait for pod list to return data ...
	I1128 00:44:24.942224   46126 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:44:26.037379   46126 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:44:26.037423   46126 node_conditions.go:123] node cpu capacity is 2
	I1128 00:44:26.037450   46126 node_conditions.go:105] duration metric: took 1.095218932s to run NodePressure ...
	I1128 00:44:26.037473   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:27.084620   46126 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.047120714s)
	I1128 00:44:27.084659   46126 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:44:27.100248   46126 kubeadm.go:787] kubelet initialised
	I1128 00:44:27.100282   46126 kubeadm.go:788] duration metric: took 15.606572ms waiting for restarted kubelet to initialise ...
	I1128 00:44:27.100293   46126 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:44:27.108069   46126 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.117188   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.117221   46126 pod_ready.go:81] duration metric: took 9.127662ms waiting for pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.117238   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.117247   46126 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.123182   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.123213   46126 pod_ready.go:81] duration metric: took 5.9547ms waiting for pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.123226   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.123235   46126 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.130170   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.130196   46126 pod_ready.go:81] duration metric: took 6.952194ms waiting for pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.130209   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.130216   46126 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.136895   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.136925   46126 pod_ready.go:81] duration metric: took 6.699975ms waiting for pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.136940   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.136950   46126 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2sfbm" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:24.811723   45269 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.215918902s)
	I1128 00:44:24.811757   45269 crio.go:451] Took 3.216081 seconds to extract the tarball
	I1128 00:44:24.811769   45269 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 00:44:24.856120   45269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:44:24.918138   45269 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1128 00:44:24.918185   45269 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1128 00:44:24.918257   45269 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:24.918296   45269 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1128 00:44:24.918305   45269 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1128 00:44:24.918314   45269 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:24.918297   45269 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:24.918261   45269 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:44:24.918264   45269 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:24.918585   45269 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:24.919955   45269 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:24.919959   45269 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:24.919988   45269 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1128 00:44:24.919964   45269 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:44:24.920093   45269 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:24.920302   45269 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:24.920482   45269 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:24.920497   45269 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1128 00:44:25.041095   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:25.048823   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:25.071401   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1128 00:44:25.073489   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:25.081089   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1128 00:44:25.083887   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:25.100582   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:25.150855   45269 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1128 00:44:25.150909   45269 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:25.150960   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.151148   45269 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1128 00:44:25.151198   45269 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:25.151250   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.181984   45269 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1128 00:44:25.182039   45269 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1128 00:44:25.182089   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.260634   45269 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1128 00:44:25.260687   45269 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:25.260744   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.269386   45269 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1128 00:44:25.269436   45269 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1128 00:44:25.269460   45269 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1128 00:44:25.269480   45269 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:25.269508   45269 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1128 00:44:25.269517   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.269539   45269 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:25.269552   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.269573   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.269626   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:25.269642   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:25.269701   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1128 00:44:25.269733   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:25.368354   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1128 00:44:25.368405   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1128 00:44:25.368462   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1128 00:44:25.368474   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:25.368536   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:25.368537   45269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1128 00:44:25.375204   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1128 00:44:25.375378   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1128 00:44:25.439797   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1128 00:44:25.465699   45269 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1128 00:44:25.465731   45269 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1128 00:44:25.465788   45269 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1128 00:44:25.465795   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1128 00:44:25.465810   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1128 00:44:25.797872   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:44:27.031275   45269 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.233351991s)
	I1128 00:44:27.031525   45269 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.565711109s)
	I1128 00:44:27.031549   45269 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1128 00:44:27.031594   45269 cache_images.go:92] LoadImages completed in 2.113388877s
	W1128 00:44:27.031667   45269 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I1128 00:44:27.031754   45269 ssh_runner.go:195] Run: crio config
	I1128 00:44:27.100851   45269 cni.go:84] Creating CNI manager for ""
	I1128 00:44:27.100882   45269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:27.100901   45269 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:44:27.100924   45269 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-732472 NodeName:old-k8s-version-732472 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1128 00:44:27.101119   45269 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-732472"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-732472
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.172:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:44:27.101241   45269 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-732472 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-732472 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 00:44:27.101312   45269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1128 00:44:27.111964   45269 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:44:27.112049   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:44:27.122796   45269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1128 00:44:27.149768   45269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:44:27.168520   45269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1128 00:44:27.187296   45269 ssh_runner.go:195] Run: grep 192.168.39.172	control-plane.minikube.internal$ /etc/hosts
	I1128 00:44:27.191606   45269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:44:27.205482   45269 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472 for IP: 192.168.39.172
	I1128 00:44:27.205521   45269 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:44:27.205720   45269 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:44:27.205758   45269 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:44:27.205825   45269 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.key
	I1128 00:44:27.205885   45269 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/apiserver.key.ee96354a
	I1128 00:44:27.205931   45269 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/proxy-client.key
	I1128 00:44:27.206060   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:44:27.206115   45269 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:44:27.206130   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:44:27.206176   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:44:27.206214   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:44:27.206251   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:44:27.206313   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:44:27.207009   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:44:27.233932   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 00:44:27.258138   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:44:27.282203   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 00:44:27.309304   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:44:27.335945   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:44:27.360118   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:44:23.984808   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:26.118398   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:27.491683   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "kube-proxy-2sfbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.491724   46126 pod_ready.go:81] duration metric: took 354.756767ms waiting for pod "kube-proxy-2sfbm" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.491736   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "kube-proxy-2sfbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.491745   46126 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.890269   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.890299   46126 pod_ready.go:81] duration metric: took 398.544263ms waiting for pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.890316   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.890324   46126 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:28.289016   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:28.289043   46126 pod_ready.go:81] duration metric: took 398.709637ms waiting for pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:28.289055   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:28.289062   46126 pod_ready.go:38] duration metric: took 1.188759196s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
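(The pod_ready.go waits above boil down to polling each system-critical pod for a Ready condition. The sketch below is illustrative, not minikube's code: it uses the standard client-go API, assumes the kubeconfig path written by this test run, and omits the node-Ready check that causes the "skipping!" messages in the log. All helper names are hypothetical.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls a pod in the given namespace until it is Ready
// or the timeout expires.
func waitForPodReady(clientset *kubernetes.Clientset, namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", namespace, name, timeout)
}

func main() {
	// Kubeconfig path taken from this log; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17206-4749/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(clientset, "kube-system", "coredns-5dd5756b68-n7qpb", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}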
	I1128 00:44:28.289084   46126 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:44:28.301648   46126 ops.go:34] apiserver oom_adj: -16
	I1128 00:44:28.301676   46126 kubeadm.go:640] restartCluster took 24.277487612s
	I1128 00:44:28.301683   46126 kubeadm.go:406] StartCluster complete in 24.339149368s
	I1128 00:44:28.301697   46126 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:44:28.301770   46126 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:44:28.303560   46126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:44:28.303802   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:44:28.303915   46126 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:44:28.303994   46126 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-488423"
	I1128 00:44:28.304023   46126 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-488423"
	W1128 00:44:28.304038   46126 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:44:28.304040   46126 config.go:182] Loaded profile config "default-k8s-diff-port-488423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:44:28.304063   46126 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-488423"
	I1128 00:44:28.304117   46126 host.go:66] Checking if "default-k8s-diff-port-488423" exists ...
	I1128 00:44:28.304118   46126 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-488423"
	W1128 00:44:28.304134   46126 addons.go:240] addon metrics-server should already be in state true
	I1128 00:44:28.304220   46126 host.go:66] Checking if "default-k8s-diff-port-488423" exists ...
	I1128 00:44:28.304547   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.304589   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.304669   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.304741   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.304928   46126 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-488423"
	I1128 00:44:28.304956   46126 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-488423"
	I1128 00:44:28.305388   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.305437   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.310450   46126 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-488423" context rescaled to 1 replicas
	I1128 00:44:28.310496   46126 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.242 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:44:28.312602   46126 out.go:177] * Verifying Kubernetes components...
	I1128 00:44:28.314027   46126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:44:28.321407   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40955
	I1128 00:44:28.321423   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41137
	I1128 00:44:28.322247   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.322287   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.322797   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.322820   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.322942   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.322968   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.323210   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.323242   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35475
	I1128 00:44:28.323323   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.323556   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.323775   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.323818   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.323857   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.323891   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.323937   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.323957   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.324293   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.324471   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:44:28.327954   46126 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-488423"
	W1128 00:44:28.327972   46126 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:44:28.327993   46126 host.go:66] Checking if "default-k8s-diff-port-488423" exists ...
	I1128 00:44:28.328327   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.328355   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.342376   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40729
	I1128 00:44:28.342789   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.343325   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.343366   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.343751   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.343978   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38927
	I1128 00:44:28.343995   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:44:28.344392   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.344983   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.345009   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.345366   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.345910   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:44:28.348242   46126 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:44:28.346449   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39125
	I1128 00:44:28.350126   46126 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:44:28.350147   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:44:28.350166   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:44:28.346666   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.350250   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.348589   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.350911   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.350930   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.351442   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.351817   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:44:28.353691   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:44:28.353876   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.355460   46126 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 00:44:24.141365   45815 pod_ready.go:102] pod "etcd-no-preload-473615" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:26.518655   45815 pod_ready.go:102] pod "etcd-no-preload-473615" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:26.887843   45815 pod_ready.go:92] pod "etcd-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:26.887877   45815 pod_ready.go:81] duration metric: took 4.768943982s waiting for pod "etcd-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:26.887891   45815 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:26.909504   45815 pod_ready.go:92] pod "kube-apiserver-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:26.909600   45815 pod_ready.go:81] duration metric: took 21.699474ms waiting for pod "kube-apiserver-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:26.909627   45815 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:28.354335   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:44:28.354504   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:44:28.357068   46126 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 00:44:28.357088   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 00:44:28.357094   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.357109   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:44:28.357228   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:44:28.357356   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:44:28.357475   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:44:28.360015   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.360725   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:44:28.360785   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.360994   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:44:28.361177   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:44:28.361341   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:44:28.361503   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:44:28.368150   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I1128 00:44:28.368511   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.369005   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.369023   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.369326   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.369481   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:44:28.370807   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:44:28.371066   46126 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:44:28.371078   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:44:28.371092   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:44:28.373819   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.374409   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:44:28.374510   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:44:28.374541   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.374602   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:44:28.374688   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:44:28.374768   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:44:28.474380   46126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:44:28.505183   46126 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 00:44:28.505206   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 00:44:28.536550   46126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:44:28.584832   46126 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 00:44:28.584857   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 00:44:28.626477   46126 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1128 00:44:28.626473   46126 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-488423" to be "Ready" ...
	I1128 00:44:28.644406   46126 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:44:28.644436   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 00:44:28.671872   46126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:44:29.867337   46126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.330746736s)
	I1128 00:44:29.867437   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.867451   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.867490   46126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.393076585s)
	I1128 00:44:29.867532   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.867553   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.867827   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.867841   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.867850   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.867858   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.867988   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.868006   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.868029   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.868038   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.868129   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Closing plugin on server side
	I1128 00:44:29.868145   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.868159   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.868381   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.868400   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.868429   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Closing plugin on server side
	I1128 00:44:29.876482   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.876505   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.876724   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.876736   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.885484   46126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.213575767s)
	I1128 00:44:29.885534   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.885551   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.885841   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.885862   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.885873   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.885883   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.885887   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Closing plugin on server side
	I1128 00:44:29.886153   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Closing plugin on server side
	I1128 00:44:29.886164   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.886194   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.886211   46126 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-488423"
	I1128 00:44:29.889173   46126 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1128 00:44:29.890607   46126 addons.go:502] enable addons completed in 1.586699021s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1128 00:44:30.716680   46126 node_ready.go:58] node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.385529   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:44:27.411354   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:44:27.439142   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:44:27.466763   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:44:27.497738   45269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:44:27.518132   45269 ssh_runner.go:195] Run: openssl version
	I1128 00:44:27.524720   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:44:27.537673   45269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:44:27.542561   45269 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:44:27.542623   45269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:44:27.548137   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:44:27.558112   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:44:27.568318   45269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:44:27.573638   45269 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:44:27.573697   45269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:44:27.579739   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:44:27.589908   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:44:27.599937   45269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:27.606264   45269 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:27.606340   45269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:27.612850   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:44:27.623388   45269 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:44:27.628140   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:44:27.634670   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:44:27.642071   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:44:27.650207   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:44:27.656836   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:44:27.662837   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
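
Note on the `openssl x509 -noout -in ... -checkend 86400` runs above: they confirm that each existing control-plane certificate will still be valid 24 hours from now before the cached cluster configuration is reused. A minimal Go sketch of an equivalent check follows; it is illustrative only, not minikube's implementation, and the certificate path is just an example taken from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Example path from the log above; any PEM-encoded certificate works.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM data found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Same idea as `openssl x509 -checkend 86400`: fail if the certificate
	// expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate ok")
}
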
	I1128 00:44:27.668909   45269 kubeadm.go:404] StartCluster: {Name:old-k8s-version-732472 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-732472 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:44:27.669005   45269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:44:27.669075   45269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:27.711918   45269 cri.go:89] found id: ""
	I1128 00:44:27.711993   45269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:44:27.722058   45269 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:44:27.722084   45269 kubeadm.go:636] restartCluster start
	I1128 00:44:27.722146   45269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:44:27.731619   45269 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:27.733224   45269 kubeconfig.go:92] found "old-k8s-version-732472" server: "https://192.168.39.172:8443"
	I1128 00:44:27.736867   45269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:44:27.747794   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:27.747862   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:27.762055   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:27.762079   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:27.762146   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:27.773241   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:28.273910   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:28.274001   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:28.286159   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:28.773393   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:28.773492   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:28.785781   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:29.274130   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:29.274199   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:29.289388   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:29.773916   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:29.774022   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:29.789483   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:30.273920   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:30.274026   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:30.285579   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:30.773910   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:30.774005   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:30.785536   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:31.273906   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:31.273977   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:31.285344   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:31.774284   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:31.774352   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:31.786435   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:32.273928   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:32.274008   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:32.289424   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:28.484735   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:30.983088   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:28.945293   45815 pod_ready.go:102] pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:30.445111   45815 pod_ready.go:92] pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:30.445133   45815 pod_ready.go:81] duration metric: took 3.535488087s waiting for pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.445143   45815 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-trr4j" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.450322   45815 pod_ready.go:92] pod "kube-proxy-trr4j" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:30.450342   45815 pod_ready.go:81] duration metric: took 5.193276ms waiting for pod "kube-proxy-trr4j" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.450350   45815 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.455002   45815 pod_ready.go:92] pod "kube-scheduler-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:30.455021   45815 pod_ready.go:81] duration metric: took 4.664949ms waiting for pod "kube-scheduler-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.455030   45815 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:32.915566   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:32.717086   46126 node_ready.go:58] node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:33.216905   46126 node_ready.go:49] node "default-k8s-diff-port-488423" has status "Ready":"True"
	I1128 00:44:33.216930   46126 node_ready.go:38] duration metric: took 4.590426391s waiting for node "default-k8s-diff-port-488423" to be "Ready" ...
	I1128 00:44:33.216938   46126 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:44:33.223257   46126 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:33.744567   46126 pod_ready.go:92] pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:33.744592   46126 pod_ready.go:81] duration metric: took 521.313062ms waiting for pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:33.744601   46126 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:35.763867   46126 pod_ready.go:102] pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:32.773549   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:32.773643   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:32.785461   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:33.273911   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:33.273994   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:33.285646   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:33.773944   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:33.774046   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:33.786576   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:34.273902   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:34.273969   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:34.285791   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:34.773895   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:34.773965   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:34.785934   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:35.273675   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:35.273738   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:35.285549   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:35.773954   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:35.774041   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:35.786010   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:36.273591   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:36.273659   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:36.284794   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:36.773864   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:36.773931   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:36.786610   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:37.273899   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:37.274025   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:37.285678   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:32.983159   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:34.985149   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:37.482210   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:35.413821   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:37.417790   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:37.768358   46126 pod_ready.go:92] pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:37.768398   46126 pod_ready.go:81] duration metric: took 4.023788643s waiting for pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.768411   46126 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.775805   46126 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:37.775835   46126 pod_ready.go:81] duration metric: took 7.41435ms waiting for pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.775847   46126 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.788110   46126 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:37.788139   46126 pod_ready.go:81] duration metric: took 12.28235ms waiting for pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.788151   46126 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2sfbm" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:38.018402   46126 pod_ready.go:92] pod "kube-proxy-2sfbm" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:38.018426   46126 pod_ready.go:81] duration metric: took 230.267334ms waiting for pod "kube-proxy-2sfbm" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:38.018443   46126 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:38.818531   46126 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:38.818559   46126 pod_ready.go:81] duration metric: took 800.108369ms waiting for pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:38.818572   46126 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:41.127953   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:37.748214   45269 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:44:37.748260   45269 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:44:37.748276   45269 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:44:37.748334   45269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:37.796781   45269 cri.go:89] found id: ""
	I1128 00:44:37.796866   45269 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:44:37.814832   45269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:44:37.824395   45269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:44:37.824469   45269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:37.833592   45269 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:37.833618   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:37.955071   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:38.939529   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:39.160852   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:39.243789   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:39.372434   45269 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:44:39.372525   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:39.405594   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:39.927024   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:40.426600   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:40.927163   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:40.966905   45269 api_server.go:72] duration metric: took 1.594470962s to wait for apiserver process to appear ...
	I1128 00:44:40.966937   45269 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:44:40.966959   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:40.967412   45269 api_server.go:269] stopped: https://192.168.39.172:8443/healthz: Get "https://192.168.39.172:8443/healthz": dial tcp 192.168.39.172:8443: connect: connection refused
	I1128 00:44:40.967457   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:40.967851   45269 api_server.go:269] stopped: https://192.168.39.172:8443/healthz: Get "https://192.168.39.172:8443/healthz": dial tcp 192.168.39.172:8443: connect: connection refused
	I1128 00:44:41.468536   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:39.483204   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:41.483578   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:39.914738   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:42.415305   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:43.130157   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:45.628970   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:46.468813   45269 api_server.go:269] stopped: https://192.168.39.172:8443/healthz: Get "https://192.168.39.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1128 00:44:46.468859   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:43.984318   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:46.483855   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:44.914911   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:47.415274   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:47.435553   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:47.435586   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:47.435601   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:47.480977   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:47.481002   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:47.481012   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:47.506064   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:47.506098   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:47.968355   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:47.974731   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1128 00:44:47.974766   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1128 00:44:48.468954   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:48.484597   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1128 00:44:48.484627   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1128 00:44:48.968810   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:48.979310   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I1128 00:44:48.987751   45269 api_server.go:141] control plane version: v1.16.0
	I1128 00:44:48.987782   45269 api_server.go:131] duration metric: took 8.020836981s to wait for apiserver health ...
	I1128 00:44:48.987793   45269 cni.go:84] Creating CNI manager for ""
	I1128 00:44:48.987801   45269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:48.989720   45269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
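
Note on the healthz sequence above: it is the usual apiserver restart pattern seen in these logs, first connection refused while the process starts, then 403 because anonymous requests are rejected until the RBAC bootstrap roles exist, then 500 while post-start hooks finish, and finally 200. A minimal Go sketch of that kind of polling loop follows; it is illustrative only, the URL is taken from the log, and skipping TLS verification is an assumption made for brevity (minikube itself trusts the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify keeps the example short; a real client would
	// load and trust the cluster CA certificate instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.172:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("not ready yet, status:", resp.StatusCode)
		} else {
			fmt.Println("not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for /healthz")
}
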
	I1128 00:44:48.129394   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:50.130239   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:48.991320   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:44:49.001231   45269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:44:49.019895   45269 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:44:49.027389   45269 system_pods.go:59] 7 kube-system pods found
	I1128 00:44:49.027417   45269 system_pods.go:61] "coredns-5644d7b6d9-9sh7z" [dcc226fb-5fd9-4757-bd93-1113f185cdce] Running
	I1128 00:44:49.027422   45269 system_pods.go:61] "etcd-old-k8s-version-732472" [a5899a5a-4812-41e1-9251-78fdaeea9597] Running
	I1128 00:44:49.027428   45269 system_pods.go:61] "kube-apiserver-old-k8s-version-732472" [13d2df8c-84a3-4bd4-8eab-ed9f732a3839] Running
	I1128 00:44:49.027435   45269 system_pods.go:61] "kube-controller-manager-old-k8s-version-732472" [6dc1e479-1a3a-4b9e-acd6-1183a25aece4] Running
	I1128 00:44:49.027441   45269 system_pods.go:61] "kube-proxy-jqrks" [e8fd665a-099e-4941-a8f2-917d2b864eeb] Running
	I1128 00:44:49.027447   45269 system_pods.go:61] "kube-scheduler-old-k8s-version-732472" [de147a31-927e-4051-b6ae-05ddf59290c8] Running
	I1128 00:44:49.027457   45269 system_pods.go:61] "storage-provisioner" [8d7e725e-6c26-4435-8605-88c7d924f5ca] Running
	I1128 00:44:49.027469   45269 system_pods.go:74] duration metric: took 7.544096ms to wait for pod list to return data ...
	I1128 00:44:49.027479   45269 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:44:49.032133   45269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:44:49.032170   45269 node_conditions.go:123] node cpu capacity is 2
	I1128 00:44:49.032183   45269 node_conditions.go:105] duration metric: took 4.695493ms to run NodePressure ...
	I1128 00:44:49.032203   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:49.293443   45269 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:44:49.297880   45269 retry.go:31] will retry after 216.894607ms: kubelet not initialised
	I1128 00:44:49.528912   45269 retry.go:31] will retry after 354.406288ms: kubelet not initialised
	I1128 00:44:49.897328   45269 retry.go:31] will retry after 462.959721ms: kubelet not initialised
	I1128 00:44:50.368260   45269 retry.go:31] will retry after 930.99638ms: kubelet not initialised
	I1128 00:44:51.303993   45269 retry.go:31] will retry after 1.275477572s: kubelet not initialised
	I1128 00:44:48.984387   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:51.482900   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:49.916072   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:52.415253   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:52.626182   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:54.626822   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:56.627881   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:52.584797   45269 retry.go:31] will retry after 2.542158001s: kubelet not initialised
	I1128 00:44:55.132600   45269 retry.go:31] will retry after 1.850404606s: kubelet not initialised
	I1128 00:44:56.987924   45269 retry.go:31] will retry after 2.371310185s: kubelet not initialised
	I1128 00:44:53.483557   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:55.982236   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:54.916135   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:57.415818   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:59.127409   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:01.629561   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:59.366141   45269 retry.go:31] will retry after 8.068803464s: kubelet not initialised
	I1128 00:44:57.983189   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:00.482336   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:02.483708   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:59.915991   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:02.414672   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:04.127296   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:06.127766   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:04.484008   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:06.983257   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:04.415147   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:06.914282   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:08.128322   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:10.627792   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:07.439538   45269 retry.go:31] will retry after 10.31431504s: kubelet not initialised
	I1128 00:45:08.985186   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:11.481933   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:08.914385   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:11.414899   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:12.628874   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:14.629312   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:17.126592   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:13.487653   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:15.983710   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:13.915497   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:15.915686   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:18.416396   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:19.127337   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:21.128352   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:17.759682   45269 retry.go:31] will retry after 12.137072248s: kubelet not initialised
	I1128 00:45:18.482187   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:20.982360   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:20.915228   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:22.918669   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:23.630252   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:26.128326   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:22.982597   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:24.983348   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:26.985418   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:25.415620   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:27.914150   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:28.626533   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:30.633655   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:29.902379   45269 kubeadm.go:787] kubelet initialised
	I1128 00:45:29.902403   45269 kubeadm.go:788] duration metric: took 40.608931816s waiting for restarted kubelet to initialise ...
	I1128 00:45:29.902410   45269 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:45:29.908442   45269 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9sh7z" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.914018   45269 pod_ready.go:92] pod "coredns-5644d7b6d9-9sh7z" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:29.914055   45269 pod_ready.go:81] duration metric: took 5.584146ms waiting for pod "coredns-5644d7b6d9-9sh7z" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.914069   45269 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-v8z7h" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.918699   45269 pod_ready.go:92] pod "coredns-5644d7b6d9-v8z7h" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:29.918720   45269 pod_ready.go:81] duration metric: took 4.644035ms waiting for pod "coredns-5644d7b6d9-v8z7h" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.918729   45269 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.922818   45269 pod_ready.go:92] pod "etcd-old-k8s-version-732472" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:29.922837   45269 pod_ready.go:81] duration metric: took 4.102217ms waiting for pod "etcd-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.922846   45269 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.927182   45269 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-732472" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:29.927208   45269 pod_ready.go:81] duration metric: took 4.354519ms waiting for pod "kube-apiserver-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.927220   45269 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:30.301553   45269 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-732472" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:30.301583   45269 pod_ready.go:81] duration metric: took 374.352863ms waiting for pod "kube-controller-manager-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:30.301611   45269 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jqrks" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:30.700858   45269 pod_ready.go:92] pod "kube-proxy-jqrks" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:30.700879   45269 pod_ready.go:81] duration metric: took 399.260896ms waiting for pod "kube-proxy-jqrks" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:30.700890   45269 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:31.103319   45269 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-732472" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:31.103340   45269 pod_ready.go:81] duration metric: took 402.442769ms waiting for pod "kube-scheduler-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:31.103349   45269 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.482088   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:31.483235   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:29.915117   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:32.416142   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:33.127196   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:35.127500   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:37.128846   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:33.422466   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:35.908596   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:33.983360   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:35.983776   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:34.417575   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:36.915005   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:39.627473   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:42.126292   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:37.908783   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:39.909842   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:41.910185   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:38.481697   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:40.481935   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:42.483458   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:39.415244   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:41.915086   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:44.127088   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:46.128254   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:44.409802   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:46.415828   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:44.986515   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:47.483162   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:44.414253   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:46.416386   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:48.628705   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:51.130754   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:48.908171   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:50.910974   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:49.985617   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:52.483720   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:48.915063   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:50.915382   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:53.414813   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:53.627668   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:55.629312   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:53.409415   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:55.420993   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:54.983055   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:56.983251   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:55.919627   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:58.415481   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:58.129666   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:00.629368   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:57.910151   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:00.408805   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:59.485375   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:01.983754   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:00.915086   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:03.413478   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:03.129933   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:05.627697   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:02.410888   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:04.910323   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:04.482593   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:06.981922   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:05.414437   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:07.415659   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:07.628741   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:10.126717   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:12.127246   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:07.408374   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:09.411381   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:11.416658   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:08.982790   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:10.984134   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:09.914828   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:11.915812   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:14.135673   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:16.626139   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:13.909480   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:16.409873   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:13.481792   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:15.482823   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:14.416315   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:16.914123   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:18.628828   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:21.131592   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:18.411060   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:20.910071   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:17.983098   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:20.482047   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:22.483266   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:19.413826   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:21.415442   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:23.626664   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:25.626823   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:23.424355   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:25.908255   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:24.984606   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:27.482265   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:23.915227   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:26.417059   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:27.628773   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:30.126818   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:27.911487   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:30.409652   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:29.485507   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:31.983913   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:28.916438   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:31.415565   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:32.626887   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:34.628401   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:37.128691   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:32.910776   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:35.421469   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:34.482605   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:36.982844   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:33.913533   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:35.914337   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:37.914708   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:39.627072   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:41.627591   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:37.908233   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:39.910199   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:38.983620   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:41.482862   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:39.914965   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:41.915003   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:43.628492   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:46.127393   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:42.408895   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:44.409264   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:46.909077   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:43.483111   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:45.483236   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:43.916039   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:46.415407   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:48.627253   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:51.127503   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:49.418512   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:51.427899   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:47.982977   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:49.983264   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:52.483168   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:48.914124   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:50.915620   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:52.919567   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:53.627296   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:55.627334   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:53.908531   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:56.408610   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:54.983084   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:57.481889   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:55.414154   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:57.416518   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:58.126605   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:00.127372   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:02.127896   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:58.410152   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:00.910206   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:59.482177   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:01.982997   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:59.915381   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:01.915574   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:04.626760   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:06.628849   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:03.417243   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:05.417887   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:03.983490   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:05.984161   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:04.414677   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:06.420179   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:09.127843   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:11.626987   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:07.908838   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:10.408385   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:08.482404   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:10.484146   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:08.914093   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:10.922145   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:13.417231   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:13.627586   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:15.628294   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:12.410728   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:14.910177   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:16.910469   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:12.982123   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:14.984037   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:17.483771   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:15.915323   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:18.415070   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:18.129617   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:20.628266   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:19.423065   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:21.908978   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:19.983122   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:22.482857   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:20.415232   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:22.915218   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:23.129285   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:25.627839   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:23.910794   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:26.409956   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:24.985146   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:27.482512   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:24.916041   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:27.415836   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:27.627978   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:30.127213   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:32.127569   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:28.413035   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:30.909092   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:29.483528   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:31.983745   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:29.913604   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:31.914567   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:34.129952   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:36.626951   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:33.414345   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:35.414559   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:34.481916   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:36.482024   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:34.413520   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:36.414517   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:38.416081   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:38.627773   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:41.126690   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:37.414665   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:39.908876   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:38.482323   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:40.983125   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:40.914615   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:43.415528   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:43.128692   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:45.627228   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:42.412788   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:44.909732   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:46.910133   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:43.482424   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:45.482507   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:47.482562   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:45.416841   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:47.914229   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:48.127074   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:50.627355   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:49.411030   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:51.420657   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:49.483765   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:51.982325   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:50.414235   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:52.414715   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:52.627557   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:54.628111   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:57.129482   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:53.910232   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:56.409320   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:53.795074   45580 pod_ready.go:81] duration metric: took 4m0.000752019s waiting for pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace to be "Ready" ...
	E1128 00:47:53.795108   45580 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:47:53.795124   45580 pod_ready.go:38] duration metric: took 4m9.844437599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:47:53.795148   45580 kubeadm.go:640] restartCluster took 4m29.759592783s
	W1128 00:47:53.795209   45580 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 00:47:53.795237   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 00:47:54.416610   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:56.915781   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:59.129569   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:01.627046   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:58.409599   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:00.409906   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:58.916155   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:01.416966   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:03.627676   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:06.126607   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:02.410451   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:04.411074   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:06.912243   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:07.609428   45580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.814163406s)
	I1128 00:48:07.609508   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:07.624300   45580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:48:07.634606   45580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:48:07.644733   45580 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:48:07.644802   45580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 00:48:03.915780   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:06.416602   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:08.128657   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:10.629487   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:09.411193   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:11.908147   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:07.867577   45580 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 00:48:08.915404   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:11.416668   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:13.129233   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:15.630498   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:13.909762   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:16.409160   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:13.916628   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:15.916715   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:17.917022   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:19.126081   45580 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1128 00:48:19.126157   45580 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 00:48:19.126245   45580 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 00:48:19.126356   45580 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 00:48:19.126476   45580 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 00:48:19.126544   45580 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 00:48:19.128354   45580 out.go:204]   - Generating certificates and keys ...
	I1128 00:48:19.128461   45580 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 00:48:19.128546   45580 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 00:48:19.128664   45580 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 00:48:19.128807   45580 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 00:48:19.128927   45580 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 00:48:19.129001   45580 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 00:48:19.129100   45580 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 00:48:19.129175   45580 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 00:48:19.129275   45580 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 00:48:19.129387   45580 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 00:48:19.129432   45580 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 00:48:19.129501   45580 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 00:48:19.129559   45580 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 00:48:19.129616   45580 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 00:48:19.129696   45580 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 00:48:19.129760   45580 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 00:48:19.129853   45580 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 00:48:19.129921   45580 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 00:48:19.131350   45580 out.go:204]   - Booting up control plane ...
	I1128 00:48:19.131462   45580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 00:48:19.131578   45580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 00:48:19.131674   45580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 00:48:19.131798   45580 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 00:48:19.131914   45580 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 00:48:19.131972   45580 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 00:48:19.132149   45580 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 00:48:19.132245   45580 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502916 seconds
	I1128 00:48:19.132388   45580 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 00:48:19.132540   45580 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 00:48:19.132619   45580 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 00:48:19.132850   45580 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-304541 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 00:48:19.132959   45580 kubeadm.go:322] [bootstrap-token] Using token: tbyyd7.r005gkl9z2ll5pno
	I1128 00:48:19.134488   45580 out.go:204]   - Configuring RBAC rules ...
	I1128 00:48:19.134603   45580 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 00:48:19.134691   45580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 00:48:19.134841   45580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 00:48:19.135030   45580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 00:48:19.135200   45580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 00:48:19.135311   45580 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 00:48:19.135453   45580 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 00:48:19.135532   45580 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 00:48:19.135600   45580 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 00:48:19.135611   45580 kubeadm.go:322] 
	I1128 00:48:19.135692   45580 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 00:48:19.135700   45580 kubeadm.go:322] 
	I1128 00:48:19.135798   45580 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 00:48:19.135807   45580 kubeadm.go:322] 
	I1128 00:48:19.135840   45580 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 00:48:19.135916   45580 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 00:48:19.135987   45580 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 00:48:19.135996   45580 kubeadm.go:322] 
	I1128 00:48:19.136074   45580 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 00:48:19.136084   45580 kubeadm.go:322] 
	I1128 00:48:19.136153   45580 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 00:48:19.136161   45580 kubeadm.go:322] 
	I1128 00:48:19.136231   45580 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 00:48:19.136329   45580 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 00:48:19.136439   45580 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 00:48:19.136448   45580 kubeadm.go:322] 
	I1128 00:48:19.136552   45580 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 00:48:19.136662   45580 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 00:48:19.136674   45580 kubeadm.go:322] 
	I1128 00:48:19.136766   45580 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token tbyyd7.r005gkl9z2ll5pno \
	I1128 00:48:19.136878   45580 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1128 00:48:19.136907   45580 kubeadm.go:322] 	--control-plane 
	I1128 00:48:19.136913   45580 kubeadm.go:322] 
	I1128 00:48:19.136986   45580 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 00:48:19.136998   45580 kubeadm.go:322] 
	I1128 00:48:19.137097   45580 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tbyyd7.r005gkl9z2ll5pno \
	I1128 00:48:19.137259   45580 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1128 00:48:19.137282   45580 cni.go:84] Creating CNI manager for ""
	I1128 00:48:19.137290   45580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:48:19.138890   45580 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:48:18.126502   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:20.131785   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:18.410659   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:20.910338   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:19.140172   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:48:19.160540   45580 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:48:19.224333   45580 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:48:19.224409   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:19.224455   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=embed-certs-304541 minikube.k8s.io/updated_at=2023_11_28T00_48_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:19.301346   45580 ops.go:34] apiserver oom_adj: -16
	I1128 00:48:19.544274   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:19.656215   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:20.257645   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:20.757476   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:21.257246   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:21.757278   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:22.256655   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:22.757282   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:20.415051   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:22.914901   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:22.627184   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:24.627388   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:27.127311   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:23.409417   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:25.909086   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:23.257594   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:23.757135   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:24.257396   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:24.757508   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:25.257426   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:25.756605   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:26.256768   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:26.756656   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:27.256783   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:27.756856   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:25.414964   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:27.415763   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:28.257005   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:28.756875   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:29.256833   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:29.757261   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:30.257313   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:30.756918   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:31.257535   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:31.757356   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:31.917284   45580 kubeadm.go:1081] duration metric: took 12.692941702s to wait for elevateKubeSystemPrivileges.
	I1128 00:48:31.917326   45580 kubeadm.go:406] StartCluster complete in 5m7.933075195s
	I1128 00:48:31.917353   45580 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:48:31.917430   45580 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:48:31.919940   45580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:48:31.920855   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:48:31.921063   45580 config.go:182] Loaded profile config "embed-certs-304541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:48:31.921037   45580 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:48:31.921110   45580 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-304541"
	I1128 00:48:31.921123   45580 addons.go:69] Setting default-storageclass=true in profile "embed-certs-304541"
	I1128 00:48:31.921143   45580 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-304541"
	I1128 00:48:31.921148   45580 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-304541"
	W1128 00:48:31.921152   45580 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:48:31.921116   45580 addons.go:69] Setting metrics-server=true in profile "embed-certs-304541"
	I1128 00:48:31.921213   45580 host.go:66] Checking if "embed-certs-304541" exists ...
	I1128 00:48:31.921220   45580 addons.go:231] Setting addon metrics-server=true in "embed-certs-304541"
	W1128 00:48:31.921229   45580 addons.go:240] addon metrics-server should already be in state true
	I1128 00:48:31.921265   45580 host.go:66] Checking if "embed-certs-304541" exists ...
	I1128 00:48:31.921531   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.921545   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.921578   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.921584   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.921594   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.921605   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.941345   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39959
	I1128 00:48:31.941374   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33283
	I1128 00:48:31.941359   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I1128 00:48:31.942009   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.942040   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.942449   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.942460   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.942477   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.942488   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.942549   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.942937   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.942955   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.943129   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.943134   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.943300   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.943646   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.943671   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.943774   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:48:31.944439   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.944470   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.947579   45580 addons.go:231] Setting addon default-storageclass=true in "embed-certs-304541"
	W1128 00:48:31.947605   45580 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:48:31.947635   45580 host.go:66] Checking if "embed-certs-304541" exists ...
	I1128 00:48:31.948083   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.948114   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.964906   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39541
	I1128 00:48:31.964942   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38435
	I1128 00:48:31.966157   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.966261   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.966778   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.966795   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.966980   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.966999   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.967444   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.967481   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37679
	I1128 00:48:31.967447   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.967636   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:48:31.968331   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.968434   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:48:31.968812   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.968830   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.969729   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:48:31.972519   45580 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:48:31.970271   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:48:31.972982   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.974461   45580 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:48:31.974479   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:48:31.974498   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:48:31.976187   45580 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 00:48:31.974991   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.977660   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:31.977907   45580 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 00:48:31.977925   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 00:48:31.977943   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:48:31.978001   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.978243   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:48:31.978264   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:31.978506   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:48:31.978727   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:48:31.978954   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:48:31.979170   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:48:31.980878   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:31.981226   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:48:31.981262   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:31.981399   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:48:31.981571   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:48:31.981690   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:48:31.981810   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:48:31.997812   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43311
	I1128 00:48:31.998404   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.998989   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.999016   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.999427   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.999652   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:48:32.001212   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:48:32.001482   45580 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:48:32.001496   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:48:32.001513   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:48:32.002981   45580 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-304541" context rescaled to 1 replicas
	I1128 00:48:32.003019   45580 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.93 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:48:32.005961   45580 out.go:177] * Verifying Kubernetes components...
	I1128 00:48:29.127403   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:31.127830   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:27.911587   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:30.411923   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:32.004640   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:32.005211   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:48:32.007586   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:32.007585   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:48:32.007700   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:32.007722   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:48:32.007894   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:48:32.008049   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:48:32.213297   45580 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 00:48:32.213322   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 00:48:32.255646   45580 node_ready.go:35] waiting up to 6m0s for node "embed-certs-304541" to be "Ready" ...
	I1128 00:48:32.255743   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 00:48:32.268542   45580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:48:32.270044   45580 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 00:48:32.270066   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 00:48:32.304458   45580 node_ready.go:49] node "embed-certs-304541" has status "Ready":"True"
	I1128 00:48:32.304486   45580 node_ready.go:38] duration metric: took 48.802082ms waiting for node "embed-certs-304541" to be "Ready" ...
	I1128 00:48:32.304498   45580 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:48:32.320550   45580 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6n54l" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:32.437814   45580 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:48:32.437852   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 00:48:32.462274   45580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:48:32.541622   45580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:48:29.418692   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:30.455152   45815 pod_ready.go:81] duration metric: took 4m0.000108261s waiting for pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace to be "Ready" ...
	E1128 00:48:30.455199   45815 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:48:30.455216   45815 pod_ready.go:38] duration metric: took 4m12.906382743s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:48:30.455251   45815 kubeadm.go:640] restartCluster took 4m33.513232005s
	W1128 00:48:30.455312   45815 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 00:48:30.455356   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 00:48:34.327113   45580 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.071322786s)
	I1128 00:48:34.327155   45580 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1128 00:48:34.342711   45580 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.074127133s)
	I1128 00:48:34.342776   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.342791   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.343188   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.343284   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.343328   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.343339   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.343348   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.343581   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.343598   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.366719   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.366754   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.367052   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.367104   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.367119   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.467705   45580 pod_ready.go:102] pod "coredns-5dd5756b68-6n54l" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:34.935662   45580 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.473338078s)
	I1128 00:48:34.935745   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.935814   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.936143   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.936184   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.936193   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.936203   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.936211   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.936435   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.936482   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.977248   45580 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.435573064s)
	I1128 00:48:34.977318   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.977345   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.977738   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.977785   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.977806   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.977824   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.977837   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.979823   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.979823   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.979849   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.979860   45580 addons.go:467] Verifying addon metrics-server=true in "embed-certs-304541"
	I1128 00:48:34.981768   45580 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 00:48:33.129597   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:35.129880   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:32.912875   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:35.411225   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:34.983440   45580 addons.go:502] enable addons completed in 3.062399778s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1128 00:48:36.495977   45580 pod_ready.go:92] pod "coredns-5dd5756b68-6n54l" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.496002   45580 pod_ready.go:81] duration metric: took 4.175421265s waiting for pod "coredns-5dd5756b68-6n54l" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.496012   45580 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kjg5f" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.508269   45580 pod_ready.go:92] pod "coredns-5dd5756b68-kjg5f" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.508293   45580 pod_ready.go:81] duration metric: took 12.274473ms waiting for pod "coredns-5dd5756b68-kjg5f" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.508302   45580 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.515826   45580 pod_ready.go:92] pod "etcd-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.515855   45580 pod_ready.go:81] duration metric: took 7.545794ms waiting for pod "etcd-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.515873   45580 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.523206   45580 pod_ready.go:92] pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.523271   45580 pod_ready.go:81] duration metric: took 7.388614ms waiting for pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.523286   45580 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.529859   45580 pod_ready.go:92] pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.529881   45580 pod_ready.go:81] duration metric: took 6.58575ms waiting for pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.529889   45580 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w5ct2" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.857435   45580 pod_ready.go:92] pod "kube-proxy-w5ct2" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.857467   45580 pod_ready.go:81] duration metric: took 327.570428ms waiting for pod "kube-proxy-w5ct2" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.857481   45580 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:37.257433   45580 pod_ready.go:92] pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:37.257455   45580 pod_ready.go:81] duration metric: took 399.966903ms waiting for pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:37.257462   45580 pod_ready.go:38] duration metric: took 4.952954771s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:48:37.257476   45580 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:48:37.257523   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:48:37.275627   45580 api_server.go:72] duration metric: took 5.272574466s to wait for apiserver process to appear ...
	I1128 00:48:37.275656   45580 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:48:37.275673   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:48:37.283884   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 200:
	ok
	I1128 00:48:37.285716   45580 api_server.go:141] control plane version: v1.28.4
	I1128 00:48:37.285744   45580 api_server.go:131] duration metric: took 10.080776ms to wait for apiserver health ...
	I1128 00:48:37.285766   45580 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:48:37.460530   45580 system_pods.go:59] 9 kube-system pods found
	I1128 00:48:37.460555   45580 system_pods.go:61] "coredns-5dd5756b68-6n54l" [bb59175d-e2d9-4c98-9940-b705fa76512f] Running
	I1128 00:48:37.460560   45580 system_pods.go:61] "coredns-5dd5756b68-kjg5f" [bf956dfb-3a7f-4605-a849-ee887562fce5] Running
	I1128 00:48:37.460563   45580 system_pods.go:61] "etcd-embed-certs-304541" [7726ea36-d2a2-4ba8-ad20-e892b0c0059c] Running
	I1128 00:48:37.460568   45580 system_pods.go:61] "kube-apiserver-embed-certs-304541" [340e8023-afd3-4105-b513-3f232dfbd370] Running
	I1128 00:48:37.460572   45580 system_pods.go:61] "kube-controller-manager-embed-certs-304541" [ddba15be-e7c2-4cea-9256-1d7e6ea7b017] Running
	I1128 00:48:37.460575   45580 system_pods.go:61] "kube-proxy-w5ct2" [b3ac66db-fe8d-419d-9237-b0dd4077559a] Running
	I1128 00:48:37.460579   45580 system_pods.go:61] "kube-scheduler-embed-certs-304541" [30830958-963d-4571-8e47-acc169506ead] Running
	I1128 00:48:37.460585   45580 system_pods.go:61] "metrics-server-57f55c9bc5-xzz2t" [926e9a40-f0fe-47ea-8e44-6816132ec0c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:48:37.460589   45580 system_pods.go:61] "storage-provisioner" [c62a8419-b0e5-4330-a49b-986693e183b2] Running
	I1128 00:48:37.460597   45580 system_pods.go:74] duration metric: took 174.824783ms to wait for pod list to return data ...
	I1128 00:48:37.460619   45580 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:48:37.656404   45580 default_sa.go:45] found service account: "default"
	I1128 00:48:37.656431   45580 default_sa.go:55] duration metric: took 195.805836ms for default service account to be created ...
	I1128 00:48:37.656444   45580 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:48:37.861049   45580 system_pods.go:86] 9 kube-system pods found
	I1128 00:48:37.861086   45580 system_pods.go:89] "coredns-5dd5756b68-6n54l" [bb59175d-e2d9-4c98-9940-b705fa76512f] Running
	I1128 00:48:37.861095   45580 system_pods.go:89] "coredns-5dd5756b68-kjg5f" [bf956dfb-3a7f-4605-a849-ee887562fce5] Running
	I1128 00:48:37.861101   45580 system_pods.go:89] "etcd-embed-certs-304541" [7726ea36-d2a2-4ba8-ad20-e892b0c0059c] Running
	I1128 00:48:37.861108   45580 system_pods.go:89] "kube-apiserver-embed-certs-304541" [340e8023-afd3-4105-b513-3f232dfbd370] Running
	I1128 00:48:37.861116   45580 system_pods.go:89] "kube-controller-manager-embed-certs-304541" [ddba15be-e7c2-4cea-9256-1d7e6ea7b017] Running
	I1128 00:48:37.861122   45580 system_pods.go:89] "kube-proxy-w5ct2" [b3ac66db-fe8d-419d-9237-b0dd4077559a] Running
	I1128 00:48:37.861128   45580 system_pods.go:89] "kube-scheduler-embed-certs-304541" [30830958-963d-4571-8e47-acc169506ead] Running
	I1128 00:48:37.861140   45580 system_pods.go:89] "metrics-server-57f55c9bc5-xzz2t" [926e9a40-f0fe-47ea-8e44-6816132ec0c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:48:37.861157   45580 system_pods.go:89] "storage-provisioner" [c62a8419-b0e5-4330-a49b-986693e183b2] Running
	I1128 00:48:37.861171   45580 system_pods.go:126] duration metric: took 204.720501ms to wait for k8s-apps to be running ...
	I1128 00:48:37.861187   45580 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:48:37.861241   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:37.875344   45580 system_svc.go:56] duration metric: took 14.150294ms WaitForService to wait for kubelet.
	I1128 00:48:37.875380   45580 kubeadm.go:581] duration metric: took 5.872335245s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:48:37.875407   45580 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:48:38.057075   45580 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:48:38.057106   45580 node_conditions.go:123] node cpu capacity is 2
	I1128 00:48:38.057117   45580 node_conditions.go:105] duration metric: took 181.705529ms to run NodePressure ...
	I1128 00:48:38.057127   45580 start.go:228] waiting for startup goroutines ...
	I1128 00:48:38.057133   45580 start.go:233] waiting for cluster config update ...
	I1128 00:48:38.057141   45580 start.go:242] writing updated cluster config ...
	I1128 00:48:38.057366   45580 ssh_runner.go:195] Run: rm -f paused
	I1128 00:48:38.107014   45580 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 00:48:38.109071   45580 out.go:177] * Done! kubectl is now configured to use "embed-certs-304541" cluster and "default" namespace by default
	I1128 00:48:37.626062   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:38.819130   46126 pod_ready.go:81] duration metric: took 4m0.000531461s waiting for pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace to be "Ready" ...
	E1128 00:48:38.819159   46126 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:48:38.819168   46126 pod_ready.go:38] duration metric: took 4m5.602220781s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:48:38.819189   46126 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:48:38.819216   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 00:48:38.819269   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 00:48:38.882052   46126 cri.go:89] found id: "a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:38.882075   46126 cri.go:89] found id: ""
	I1128 00:48:38.882084   46126 logs.go:284] 1 containers: [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6]
	I1128 00:48:38.882143   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:38.886688   46126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 00:48:38.886751   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 00:48:38.926163   46126 cri.go:89] found id: "0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:38.926190   46126 cri.go:89] found id: ""
	I1128 00:48:38.926197   46126 logs.go:284] 1 containers: [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c]
	I1128 00:48:38.926259   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:38.930505   46126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 00:48:38.930558   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 00:48:38.979793   46126 cri.go:89] found id: "02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:38.979816   46126 cri.go:89] found id: ""
	I1128 00:48:38.979823   46126 logs.go:284] 1 containers: [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b]
	I1128 00:48:38.979876   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:38.984146   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 00:48:38.984244   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 00:48:39.033485   46126 cri.go:89] found id: "032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:39.033509   46126 cri.go:89] found id: ""
	I1128 00:48:39.033519   46126 logs.go:284] 1 containers: [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193]
	I1128 00:48:39.033575   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:39.038977   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 00:48:39.039038   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 00:48:39.079669   46126 cri.go:89] found id: "2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:39.079697   46126 cri.go:89] found id: ""
	I1128 00:48:39.079707   46126 logs.go:284] 1 containers: [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55]
	I1128 00:48:39.079767   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:39.084447   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 00:48:39.084515   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 00:48:39.121494   46126 cri.go:89] found id: "cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:39.121523   46126 cri.go:89] found id: ""
	I1128 00:48:39.121533   46126 logs.go:284] 1 containers: [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64]
	I1128 00:48:39.121594   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:39.126495   46126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 00:48:39.126554   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 00:48:39.168822   46126 cri.go:89] found id: ""
	I1128 00:48:39.168851   46126 logs.go:284] 0 containers: []
	W1128 00:48:39.168862   46126 logs.go:286] No container was found matching "kindnet"
	I1128 00:48:39.168869   46126 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 00:48:39.168924   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 00:48:39.213834   46126 cri.go:89] found id: "fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:39.213859   46126 cri.go:89] found id: ""
	I1128 00:48:39.213869   46126 logs.go:284] 1 containers: [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc]
	I1128 00:48:39.213914   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:39.218746   46126 logs.go:123] Gathering logs for dmesg ...
	I1128 00:48:39.218772   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 00:48:39.232098   46126 logs.go:123] Gathering logs for describe nodes ...
	I1128 00:48:39.232127   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 00:48:39.373641   46126 logs.go:123] Gathering logs for kube-apiserver [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6] ...
	I1128 00:48:39.373674   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:39.451311   46126 logs.go:123] Gathering logs for storage-provisioner [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc] ...
	I1128 00:48:39.451349   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:39.498219   46126 logs.go:123] Gathering logs for CRI-O ...
	I1128 00:48:39.498247   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 00:48:39.952276   46126 logs.go:123] Gathering logs for kubelet ...
	I1128 00:48:39.952314   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 00:48:40.008385   46126 logs.go:123] Gathering logs for coredns [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b] ...
	I1128 00:48:40.008425   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:40.052409   46126 logs.go:123] Gathering logs for kube-scheduler [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193] ...
	I1128 00:48:40.052443   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:40.092943   46126 logs.go:123] Gathering logs for kube-proxy [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55] ...
	I1128 00:48:40.092978   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:40.135490   46126 logs.go:123] Gathering logs for kube-controller-manager [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64] ...
	I1128 00:48:40.135520   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:40.189756   46126 logs.go:123] Gathering logs for container status ...
	I1128 00:48:40.189793   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 00:48:40.242615   46126 logs.go:123] Gathering logs for etcd [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c] ...
	I1128 00:48:40.242643   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:37.415898   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:39.910954   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:42.802428   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:48:42.818606   46126 api_server.go:72] duration metric: took 4m14.508070703s to wait for apiserver process to appear ...
	I1128 00:48:42.818632   46126 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:48:42.818667   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 00:48:42.818721   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 00:48:42.872566   46126 cri.go:89] found id: "a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:42.872603   46126 cri.go:89] found id: ""
	I1128 00:48:42.872613   46126 logs.go:284] 1 containers: [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6]
	I1128 00:48:42.872675   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:42.878165   46126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 00:48:42.878232   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 00:48:42.924667   46126 cri.go:89] found id: "0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:42.924689   46126 cri.go:89] found id: ""
	I1128 00:48:42.924699   46126 logs.go:284] 1 containers: [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c]
	I1128 00:48:42.924772   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:42.929748   46126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 00:48:42.929809   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 00:48:42.977787   46126 cri.go:89] found id: "02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:42.977815   46126 cri.go:89] found id: ""
	I1128 00:48:42.977825   46126 logs.go:284] 1 containers: [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b]
	I1128 00:48:42.977887   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:42.982991   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 00:48:42.983071   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 00:48:43.032835   46126 cri.go:89] found id: "032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:43.032866   46126 cri.go:89] found id: ""
	I1128 00:48:43.032876   46126 logs.go:284] 1 containers: [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193]
	I1128 00:48:43.032933   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:43.038635   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 00:48:43.038711   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 00:48:43.084051   46126 cri.go:89] found id: "2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:43.084080   46126 cri.go:89] found id: ""
	I1128 00:48:43.084090   46126 logs.go:284] 1 containers: [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55]
	I1128 00:48:43.084161   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:43.088908   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 00:48:43.088976   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 00:48:43.130640   46126 cri.go:89] found id: "cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:43.130666   46126 cri.go:89] found id: ""
	I1128 00:48:43.130676   46126 logs.go:284] 1 containers: [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64]
	I1128 00:48:43.130738   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:43.135354   46126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 00:48:43.135434   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 00:48:43.179655   46126 cri.go:89] found id: ""
	I1128 00:48:43.179690   46126 logs.go:284] 0 containers: []
	W1128 00:48:43.179699   46126 logs.go:286] No container was found matching "kindnet"
	I1128 00:48:43.179705   46126 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 00:48:43.179770   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 00:48:43.228309   46126 cri.go:89] found id: "fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:43.228335   46126 cri.go:89] found id: ""
	I1128 00:48:43.228343   46126 logs.go:284] 1 containers: [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc]
	I1128 00:48:43.228404   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:43.233343   46126 logs.go:123] Gathering logs for dmesg ...
	I1128 00:48:43.233375   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 00:48:43.247396   46126 logs.go:123] Gathering logs for describe nodes ...
	I1128 00:48:43.247430   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 00:48:43.386131   46126 logs.go:123] Gathering logs for kube-apiserver [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6] ...
	I1128 00:48:43.386181   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:43.463228   46126 logs.go:123] Gathering logs for kube-proxy [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55] ...
	I1128 00:48:43.463275   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:43.519469   46126 logs.go:123] Gathering logs for kube-controller-manager [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64] ...
	I1128 00:48:43.519511   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:43.581402   46126 logs.go:123] Gathering logs for container status ...
	I1128 00:48:43.581437   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 00:48:43.641804   46126 logs.go:123] Gathering logs for kubelet ...
	I1128 00:48:43.641844   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 00:48:43.707768   46126 logs.go:123] Gathering logs for etcd [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c] ...
	I1128 00:48:43.707807   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:43.779636   46126 logs.go:123] Gathering logs for coredns [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b] ...
	I1128 00:48:43.779673   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:43.822939   46126 logs.go:123] Gathering logs for kube-scheduler [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193] ...
	I1128 00:48:43.822972   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:43.869304   46126 logs.go:123] Gathering logs for storage-provisioner [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc] ...
	I1128 00:48:43.869344   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:43.917500   46126 logs.go:123] Gathering logs for CRI-O ...
	I1128 00:48:43.917528   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 00:48:46.886551   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:48:46.892696   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 200:
	ok
	I1128 00:48:46.894400   46126 api_server.go:141] control plane version: v1.28.4
	I1128 00:48:46.894424   46126 api_server.go:131] duration metric: took 4.075784232s to wait for apiserver health ...
	I1128 00:48:46.894433   46126 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:48:46.894455   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 00:48:46.894492   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 00:48:46.939259   46126 cri.go:89] found id: "a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:46.939291   46126 cri.go:89] found id: ""
	I1128 00:48:46.939302   46126 logs.go:284] 1 containers: [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6]
	I1128 00:48:46.939364   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:46.946934   46126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 00:48:46.947012   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 00:48:46.989896   46126 cri.go:89] found id: "0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:46.989920   46126 cri.go:89] found id: ""
	I1128 00:48:46.989930   46126 logs.go:284] 1 containers: [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c]
	I1128 00:48:46.989988   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:46.994923   46126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 00:48:46.994994   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 00:48:47.040298   46126 cri.go:89] found id: "02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:47.040330   46126 cri.go:89] found id: ""
	I1128 00:48:47.040339   46126 logs.go:284] 1 containers: [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b]
	I1128 00:48:47.040396   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.045041   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 00:48:47.045113   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 00:48:47.093254   46126 cri.go:89] found id: "032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:47.093282   46126 cri.go:89] found id: ""
	I1128 00:48:47.093290   46126 logs.go:284] 1 containers: [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193]
	I1128 00:48:47.093345   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.097856   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 00:48:47.097916   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 00:48:47.150763   46126 cri.go:89] found id: "2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:47.150790   46126 cri.go:89] found id: ""
	I1128 00:48:47.150800   46126 logs.go:284] 1 containers: [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55]
	I1128 00:48:47.150855   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.155272   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 00:48:47.155348   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 00:48:47.203549   46126 cri.go:89] found id: "cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:47.203586   46126 cri.go:89] found id: ""
	I1128 00:48:47.203600   46126 logs.go:284] 1 containers: [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64]
	I1128 00:48:47.203670   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.209313   46126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 00:48:47.209384   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 00:48:42.410241   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:44.909607   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:46.893894   45815 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (16.438515297s)
	I1128 00:48:46.893965   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:46.909967   45815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:48:46.919457   45815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:48:46.928580   45815 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
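	(Illustrative note, not part of the captured log: the stale-config check above is exactly the ls invocation shown. The preceding kubeadm reset removed all four kubeconfig files, so ls exits with status 2 and minikube skips stale-config cleanup and proceeds straight to kubeadm init. A minimal way to reproduce the check by hand on the node:)

	# all four paths are absent on a freshly reset control plane, so this exits with status 2
	sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	  /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	echo "exit status: $?"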
	I1128 00:48:46.928629   45815 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 00:48:46.989655   45815 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.0
	I1128 00:48:46.989772   45815 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 00:48:47.162717   45815 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 00:48:47.162868   45815 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 00:48:47.163002   45815 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 00:48:47.453392   45815 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 00:48:47.455125   45815 out.go:204]   - Generating certificates and keys ...
	I1128 00:48:47.455291   45815 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 00:48:47.455388   45815 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 00:48:47.455530   45815 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 00:48:47.455605   45815 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 00:48:47.456116   45815 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 00:48:47.456786   45815 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 00:48:47.457320   45815 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 00:48:47.457814   45815 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 00:48:47.458228   45815 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 00:48:47.458584   45815 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 00:48:47.458984   45815 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 00:48:47.459080   45815 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 00:48:47.654823   45815 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 00:48:47.858053   45815 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1128 00:48:48.006981   45815 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 00:48:48.256244   45815 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 00:48:48.381440   45815 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 00:48:48.381976   45815 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 00:48:48.384696   45815 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 00:48:48.386824   45815 out.go:204]   - Booting up control plane ...
	I1128 00:48:48.386943   45815 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 00:48:48.387057   45815 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 00:48:48.387155   45815 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 00:48:48.404036   45815 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 00:48:48.408139   45815 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 00:48:48.408584   45815 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 00:48:48.539731   45815 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 00:48:47.259312   46126 cri.go:89] found id: ""
	I1128 00:48:47.259343   46126 logs.go:284] 0 containers: []
	W1128 00:48:47.259353   46126 logs.go:286] No container was found matching "kindnet"
	I1128 00:48:47.259361   46126 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 00:48:47.259421   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 00:48:47.308650   46126 cri.go:89] found id: "fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:47.308681   46126 cri.go:89] found id: ""
	I1128 00:48:47.308692   46126 logs.go:284] 1 containers: [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc]
	I1128 00:48:47.308764   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.313702   46126 logs.go:123] Gathering logs for dmesg ...
	I1128 00:48:47.313727   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 00:48:47.327753   46126 logs.go:123] Gathering logs for describe nodes ...
	I1128 00:48:47.327788   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 00:48:47.490493   46126 logs.go:123] Gathering logs for etcd [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c] ...
	I1128 00:48:47.490525   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:47.554064   46126 logs.go:123] Gathering logs for coredns [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b] ...
	I1128 00:48:47.554097   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:47.604401   46126 logs.go:123] Gathering logs for kube-proxy [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55] ...
	I1128 00:48:47.604433   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:47.643173   46126 logs.go:123] Gathering logs for kube-controller-manager [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64] ...
	I1128 00:48:47.643211   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:47.707400   46126 logs.go:123] Gathering logs for storage-provisioner [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc] ...
	I1128 00:48:47.707432   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:47.763831   46126 logs.go:123] Gathering logs for container status ...
	I1128 00:48:47.763860   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 00:48:47.817244   46126 logs.go:123] Gathering logs for kubelet ...
	I1128 00:48:47.817278   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 00:48:47.872462   46126 logs.go:123] Gathering logs for kube-apiserver [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6] ...
	I1128 00:48:47.872499   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:47.930695   46126 logs.go:123] Gathering logs for kube-scheduler [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193] ...
	I1128 00:48:47.930729   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:47.987718   46126 logs.go:123] Gathering logs for CRI-O ...
	I1128 00:48:47.987748   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 00:48:50.856470   46126 system_pods.go:59] 8 kube-system pods found
	I1128 00:48:50.856510   46126 system_pods.go:61] "coredns-5dd5756b68-n7qpb" [d027f799-6ced-488e-a4f7-6df351193c64] Running
	I1128 00:48:50.856518   46126 system_pods.go:61] "etcd-default-k8s-diff-port-488423" [55bf80da-df13-4429-962c-7fdb5ab44ea8] Running
	I1128 00:48:50.856525   46126 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-488423" [88715645-e98e-42be-ad99-cc7711605abc] Running
	I1128 00:48:50.856533   46126 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-488423" [07935350-12e0-4e86-8f88-7e03890aa417] Running
	I1128 00:48:50.856539   46126 system_pods.go:61] "kube-proxy-2sfbm" [8d92ac1f-4070-4000-9bc6-3d277e0c8c6e] Running
	I1128 00:48:50.856545   46126 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-488423" [42baed98-6b29-4f33-8bb3-df082a1b36ce] Running
	I1128 00:48:50.856558   46126 system_pods.go:61] "metrics-server-57f55c9bc5-fk9xx" [8b0d0cd6-41c5-4b67-98f9-f046e959e0e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:48:50.856571   46126 system_pods.go:61] "storage-provisioner" [f1e6e7d1-86aa-403c-b753-2b94beb7d7b1] Running
	I1128 00:48:50.856579   46126 system_pods.go:74] duration metric: took 3.962140088s to wait for pod list to return data ...
	I1128 00:48:50.856589   46126 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:48:50.859308   46126 default_sa.go:45] found service account: "default"
	I1128 00:48:50.859338   46126 default_sa.go:55] duration metric: took 2.741136ms for default service account to be created ...
	I1128 00:48:50.859347   46126 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:48:50.865347   46126 system_pods.go:86] 8 kube-system pods found
	I1128 00:48:50.865371   46126 system_pods.go:89] "coredns-5dd5756b68-n7qpb" [d027f799-6ced-488e-a4f7-6df351193c64] Running
	I1128 00:48:50.865377   46126 system_pods.go:89] "etcd-default-k8s-diff-port-488423" [55bf80da-df13-4429-962c-7fdb5ab44ea8] Running
	I1128 00:48:50.865382   46126 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-488423" [88715645-e98e-42be-ad99-cc7711605abc] Running
	I1128 00:48:50.865387   46126 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-488423" [07935350-12e0-4e86-8f88-7e03890aa417] Running
	I1128 00:48:50.865391   46126 system_pods.go:89] "kube-proxy-2sfbm" [8d92ac1f-4070-4000-9bc6-3d277e0c8c6e] Running
	I1128 00:48:50.865395   46126 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-488423" [42baed98-6b29-4f33-8bb3-df082a1b36ce] Running
	I1128 00:48:50.865405   46126 system_pods.go:89] "metrics-server-57f55c9bc5-fk9xx" [8b0d0cd6-41c5-4b67-98f9-f046e959e0e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:48:50.865413   46126 system_pods.go:89] "storage-provisioner" [f1e6e7d1-86aa-403c-b753-2b94beb7d7b1] Running
	I1128 00:48:50.865425   46126 system_pods.go:126] duration metric: took 6.071837ms to wait for k8s-apps to be running ...
	I1128 00:48:50.865441   46126 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:48:50.865490   46126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:50.882729   46126 system_svc.go:56] duration metric: took 17.277766ms WaitForService to wait for kubelet.
	I1128 00:48:50.882767   46126 kubeadm.go:581] duration metric: took 4m22.572235871s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:48:50.882796   46126 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:48:50.886638   46126 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:48:50.886671   46126 node_conditions.go:123] node cpu capacity is 2
	I1128 00:48:50.886684   46126 node_conditions.go:105] duration metric: took 3.881703ms to run NodePressure ...
	I1128 00:48:50.886699   46126 start.go:228] waiting for startup goroutines ...
	I1128 00:48:50.886712   46126 start.go:233] waiting for cluster config update ...
	I1128 00:48:50.886725   46126 start.go:242] writing updated cluster config ...
	I1128 00:48:50.886995   46126 ssh_runner.go:195] Run: rm -f paused
	I1128 00:48:50.947562   46126 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 00:48:50.949119   46126 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-488423" cluster and "default" namespace by default
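	(Illustrative note, not from the captured log: at this point the default-k8s-diff-port-488423 profile has finished starting, but its metrics-server-57f55c9bc5-fk9xx pod is still Pending with unready containers. The following kubectl commands are assumed, not taken from the log; the context and pod names come from the lines above.)

	# inspect the still-pending metrics-server pod reported above
	kubectl --context default-k8s-diff-port-488423 -n kube-system get pod metrics-server-57f55c9bc5-fk9xx -o wide
	kubectl --context default-k8s-diff-port-488423 -n kube-system describe pod metrics-server-57f55c9bc5-fk9xx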
	I1128 00:48:47.419653   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:49.909410   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:51.909739   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:53.910387   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:56.408786   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:56.542000   45815 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002009 seconds
	I1128 00:48:56.567203   45815 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 00:48:56.583239   45815 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 00:48:57.114661   45815 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 00:48:57.114917   45815 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-473615 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 00:48:57.633030   45815 kubeadm.go:322] [bootstrap-token] Using token: vz7ey4.v2qfoncp2ok7nh54
	I1128 00:48:57.634835   45815 out.go:204]   - Configuring RBAC rules ...
	I1128 00:48:57.634961   45815 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 00:48:57.640535   45815 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 00:48:57.653911   45815 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 00:48:57.658740   45815 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 00:48:57.662927   45815 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 00:48:57.667238   45815 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 00:48:57.688281   45815 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 00:48:57.949630   45815 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 00:48:58.055744   45815 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 00:48:58.057024   45815 kubeadm.go:322] 
	I1128 00:48:58.057159   45815 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 00:48:58.057179   45815 kubeadm.go:322] 
	I1128 00:48:58.057290   45815 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 00:48:58.057310   45815 kubeadm.go:322] 
	I1128 00:48:58.057343   45815 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 00:48:58.057431   45815 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 00:48:58.057518   45815 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 00:48:58.057536   45815 kubeadm.go:322] 
	I1128 00:48:58.057601   45815 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 00:48:58.057609   45815 kubeadm.go:322] 
	I1128 00:48:58.057673   45815 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 00:48:58.057678   45815 kubeadm.go:322] 
	I1128 00:48:58.057719   45815 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 00:48:58.057787   45815 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 00:48:58.057841   45815 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 00:48:58.057844   45815 kubeadm.go:322] 
	I1128 00:48:58.057921   45815 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 00:48:58.057987   45815 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 00:48:58.057991   45815 kubeadm.go:322] 
	I1128 00:48:58.058062   45815 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vz7ey4.v2qfoncp2ok7nh54 \
	I1128 00:48:58.058148   45815 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1128 00:48:58.058183   45815 kubeadm.go:322] 	--control-plane 
	I1128 00:48:58.058198   45815 kubeadm.go:322] 
	I1128 00:48:58.058266   45815 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 00:48:58.058272   45815 kubeadm.go:322] 
	I1128 00:48:58.058347   45815 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vz7ey4.v2qfoncp2ok7nh54 \
	I1128 00:48:58.058449   45815 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1128 00:48:58.059375   45815 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 00:48:58.059404   45815 cni.go:84] Creating CNI manager for ""
	I1128 00:48:58.059415   45815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:48:58.061524   45815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:48:58.062981   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:48:58.121061   45815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:48:58.143978   45815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:48:58.144060   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:58.144068   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=no-preload-473615 minikube.k8s.io/updated_at=2023_11_28T00_48_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:58.495592   45815 ops.go:34] apiserver oom_adj: -16
	I1128 00:48:58.495756   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:58.590073   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:58.412254   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:00.912329   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:59.189174   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:59.688440   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:00.189285   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:00.688724   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:01.189197   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:01.688512   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:02.189219   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:02.689235   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:03.189405   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:03.689243   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:03.414190   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:05.909164   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:04.188645   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:04.688928   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:05.189330   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:05.689126   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:06.189257   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:06.688476   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:07.189386   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:07.689051   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:08.188961   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:08.689080   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:09.188591   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:09.688502   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:10.188492   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:10.303728   45815 kubeadm.go:1081] duration metric: took 12.159747313s to wait for elevateKubeSystemPrivileges.
	I1128 00:49:10.303773   45815 kubeadm.go:406] StartCluster complete in 5m13.413969558s
	I1128 00:49:10.303794   45815 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:49:10.303880   45815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:49:10.306274   45815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:49:10.306559   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:49:10.306678   45815 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:49:10.306764   45815 addons.go:69] Setting storage-provisioner=true in profile "no-preload-473615"
	I1128 00:49:10.306786   45815 addons.go:231] Setting addon storage-provisioner=true in "no-preload-473615"
	W1128 00:49:10.306799   45815 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:49:10.306822   45815 config.go:182] Loaded profile config "no-preload-473615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 00:49:10.306844   45815 host.go:66] Checking if "no-preload-473615" exists ...
	I1128 00:49:10.306903   45815 addons.go:69] Setting default-storageclass=true in profile "no-preload-473615"
	I1128 00:49:10.306924   45815 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-473615"
	I1128 00:49:10.307065   45815 addons.go:69] Setting metrics-server=true in profile "no-preload-473615"
	I1128 00:49:10.307089   45815 addons.go:231] Setting addon metrics-server=true in "no-preload-473615"
	W1128 00:49:10.307097   45815 addons.go:240] addon metrics-server should already be in state true
	I1128 00:49:10.307140   45815 host.go:66] Checking if "no-preload-473615" exists ...
	I1128 00:49:10.307283   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.307284   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.307366   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.307313   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.307600   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.307650   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.323788   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I1128 00:49:10.324333   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.324915   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.324940   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.325212   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42505
	I1128 00:49:10.325655   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.325825   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.326138   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.326156   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.326346   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.326375   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.326504   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.326968   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.326991   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.330263   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44581
	I1128 00:49:10.331124   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.331538   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.331559   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.331951   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.332131   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:49:10.335360   45815 addons.go:231] Setting addon default-storageclass=true in "no-preload-473615"
	W1128 00:49:10.335378   45815 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:49:10.335405   45815 host.go:66] Checking if "no-preload-473615" exists ...
	I1128 00:49:10.335685   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.335715   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.346750   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42245
	I1128 00:49:10.346822   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46137
	I1128 00:49:10.347279   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.347400   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.347703   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.347731   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.347906   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.347919   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.347983   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.348096   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:49:10.348232   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.348429   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:49:10.350025   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:49:10.352544   45815 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 00:49:10.350506   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:49:10.355541   45815 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:49:10.354491   45815 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 00:49:10.356963   45815 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:49:10.356980   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:49:10.356993   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:49:10.355570   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 00:49:10.357068   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:49:10.356139   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I1128 00:49:10.356295   45815 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-473615" context rescaled to 1 replicas
	I1128 00:49:10.357149   45815 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.195 Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:49:10.358543   45815 out.go:177] * Verifying Kubernetes components...
	I1128 00:49:10.359926   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:49:10.357719   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.360555   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.360575   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.361020   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.361318   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.361551   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:49:10.361574   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.361736   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:49:10.361938   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:49:10.362037   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.362129   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:49:10.362295   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.362317   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.362381   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:49:10.362676   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:49:10.362699   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.362961   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:49:10.363188   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:49:10.363360   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:49:10.363499   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:49:10.381194   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42707
	I1128 00:49:10.381543   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.382012   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.382032   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.382399   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.382584   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:49:10.384269   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:49:10.384500   45815 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:49:10.384513   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:49:10.384527   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:49:10.387448   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.388000   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:49:10.388027   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.388169   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:49:10.388335   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:49:10.388477   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:49:10.388578   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:49:10.513157   45815 node_ready.go:35] waiting up to 6m0s for node "no-preload-473615" to be "Ready" ...
	I1128 00:49:10.513251   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 00:49:10.546158   45815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:49:10.566225   45815 node_ready.go:49] node "no-preload-473615" has status "Ready":"True"
	I1128 00:49:10.566248   45815 node_ready.go:38] duration metric: took 53.063342ms waiting for node "no-preload-473615" to be "Ready" ...
	I1128 00:49:10.566259   45815 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:49:10.589374   45815 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 00:49:10.589400   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 00:49:10.608085   45815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:49:10.657717   45815 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 00:49:10.657746   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 00:49:10.693300   45815 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:10.745796   45815 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:49:10.745821   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 00:49:10.820139   45815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:49:10.848411   45815 pod_ready.go:92] pod "etcd-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:10.848444   45815 pod_ready.go:81] duration metric: took 155.116855ms waiting for pod "etcd-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:10.848459   45815 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:11.035904   45815 pod_ready.go:92] pod "kube-apiserver-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:11.035929   45815 pod_ready.go:81] duration metric: took 187.461745ms waiting for pod "kube-apiserver-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:11.035941   45815 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:11.269000   45815 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1128 00:49:11.634167   45815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.087967346s)
	I1128 00:49:11.634213   45815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.026096699s)
	I1128 00:49:11.634226   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.634239   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.634250   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.634272   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.634578   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.634621   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.634637   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.634639   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.634649   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.634650   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.634656   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.634660   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.634595   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:11.634942   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:11.634958   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:11.634986   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.635009   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.634989   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.635049   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.657473   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.657495   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.657814   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.657828   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.758491   45815 pod_ready.go:92] pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:11.758514   45815 pod_ready.go:81] duration metric: took 722.565796ms waiting for pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:11.758525   45815 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bv5lq" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:12.084449   45815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.264259029s)
	I1128 00:49:12.084510   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:12.084524   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:12.084846   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:12.084865   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:12.084875   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:12.084870   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:12.084885   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:12.085142   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:12.085152   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:12.085164   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:12.085174   45815 addons.go:467] Verifying addon metrics-server=true in "no-preload-473615"
	I1128 00:49:12.087081   45815 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1128 00:49:08.409321   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:10.909836   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:12.088572   45815 addons.go:502] enable addons completed in 1.781896775s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1128 00:49:13.830651   45815 pod_ready.go:102] pod "kube-proxy-bv5lq" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:14.830780   45815 pod_ready.go:92] pod "kube-proxy-bv5lq" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:14.830805   45815 pod_ready.go:81] duration metric: took 3.072274458s waiting for pod "kube-proxy-bv5lq" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:14.830815   45815 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:14.836248   45815 pod_ready.go:92] pod "kube-scheduler-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:14.836266   45815 pod_ready.go:81] duration metric: took 5.444378ms waiting for pod "kube-scheduler-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:14.836273   45815 pod_ready.go:38] duration metric: took 4.270002588s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:49:14.836288   45815 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:49:14.836329   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:49:14.860322   45815 api_server.go:72] duration metric: took 4.503144983s to wait for apiserver process to appear ...
	I1128 00:49:14.860354   45815 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:49:14.860375   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:49:14.866977   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 200:
	ok
	I1128 00:49:14.868294   45815 api_server.go:141] control plane version: v1.29.0-rc.0
	I1128 00:49:14.868318   45815 api_server.go:131] duration metric: took 7.955565ms to wait for apiserver health ...
	I1128 00:49:14.868328   45815 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:49:14.875943   45815 system_pods.go:59] 8 kube-system pods found
	I1128 00:49:14.875972   45815 system_pods.go:61] "coredns-76f75df574-kbrjg" [881031bb-af46-48a7-b609-7fb1c96b2056] Running
	I1128 00:49:14.875979   45815 system_pods.go:61] "etcd-no-preload-473615" [ae2b57ca-5a22-4f4b-b227-00edfbb3b520] Running
	I1128 00:49:14.875986   45815 system_pods.go:61] "kube-apiserver-no-preload-473615" [9e9104c8-ee9f-4370-b92e-d301ea9cd880] Running
	I1128 00:49:14.875993   45815 system_pods.go:61] "kube-controller-manager-no-preload-473615" [f52dccb6-3d88-44b2-b733-38dd240dffa5] Running
	I1128 00:49:14.875999   45815 system_pods.go:61] "kube-proxy-bv5lq" [fe88f49f-5fc1-4877-a982-38fee04c9e2d] Running
	I1128 00:49:14.876005   45815 system_pods.go:61] "kube-scheduler-no-preload-473615" [8d6a3177-757a-493e-ba5e-265f95d6f462] Running
	I1128 00:49:14.876019   45815 system_pods.go:61] "metrics-server-57f55c9bc5-mpqdq" [8cef6d4c-e932-4c97-8d87-3b4c3777c8b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:49:14.876031   45815 system_pods.go:61] "storage-provisioner" [b8fc9309-7354-44e3-aa10-f4fb3c185f62] Running
	I1128 00:49:14.876042   45815 system_pods.go:74] duration metric: took 7.70749ms to wait for pod list to return data ...
	I1128 00:49:14.876058   45815 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:49:14.918080   45815 default_sa.go:45] found service account: "default"
	I1128 00:49:14.918107   45815 default_sa.go:55] duration metric: took 42.036279ms for default service account to be created ...
	I1128 00:49:14.918119   45815 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:49:15.120338   45815 system_pods.go:86] 8 kube-system pods found
	I1128 00:49:15.120368   45815 system_pods.go:89] "coredns-76f75df574-kbrjg" [881031bb-af46-48a7-b609-7fb1c96b2056] Running
	I1128 00:49:15.120376   45815 system_pods.go:89] "etcd-no-preload-473615" [ae2b57ca-5a22-4f4b-b227-00edfbb3b520] Running
	I1128 00:49:15.120383   45815 system_pods.go:89] "kube-apiserver-no-preload-473615" [9e9104c8-ee9f-4370-b92e-d301ea9cd880] Running
	I1128 00:49:15.120390   45815 system_pods.go:89] "kube-controller-manager-no-preload-473615" [f52dccb6-3d88-44b2-b733-38dd240dffa5] Running
	I1128 00:49:15.120395   45815 system_pods.go:89] "kube-proxy-bv5lq" [fe88f49f-5fc1-4877-a982-38fee04c9e2d] Running
	I1128 00:49:15.120401   45815 system_pods.go:89] "kube-scheduler-no-preload-473615" [8d6a3177-757a-493e-ba5e-265f95d6f462] Running
	I1128 00:49:15.120413   45815 system_pods.go:89] "metrics-server-57f55c9bc5-mpqdq" [8cef6d4c-e932-4c97-8d87-3b4c3777c8b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:49:15.120420   45815 system_pods.go:89] "storage-provisioner" [b8fc9309-7354-44e3-aa10-f4fb3c185f62] Running
	I1128 00:49:15.120437   45815 system_pods.go:126] duration metric: took 202.310611ms to wait for k8s-apps to be running ...
	I1128 00:49:15.120452   45815 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:49:15.120501   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:49:15.134858   45815 system_svc.go:56] duration metric: took 14.396652ms WaitForService to wait for kubelet.
	I1128 00:49:15.134886   45815 kubeadm.go:581] duration metric: took 4.777716544s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:49:15.134902   45815 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:49:15.318344   45815 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:49:15.318370   45815 node_conditions.go:123] node cpu capacity is 2
	I1128 00:49:15.318380   45815 node_conditions.go:105] duration metric: took 183.473974ms to run NodePressure ...
	I1128 00:49:15.318390   45815 start.go:228] waiting for startup goroutines ...
	I1128 00:49:15.318396   45815 start.go:233] waiting for cluster config update ...
	I1128 00:49:15.318405   45815 start.go:242] writing updated cluster config ...
	I1128 00:49:15.318651   45815 ssh_runner.go:195] Run: rm -f paused
	I1128 00:49:15.368036   45815 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.0 (minor skew: 1)
	I1128 00:49:15.369853   45815 out.go:177] * Done! kubectl is now configured to use "no-preload-473615" cluster and "default" namespace by default
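	(Illustrative note, not from the captured log: the no-preload-473615 profile likewise finishes with the storage-provisioner, default-storageclass and metrics-server addons enabled while metrics-server-57f55c9bc5-mpqdq remains Pending. A sketch of how the reported state could be confirmed from the test host; the binary path matches the one used elsewhere in this report, and the commands themselves are assumptions, not log output.)

	# list addon state for the profile and the kube-system pods it reports
	out/minikube-linux-amd64 -p no-preload-473615 addons list
	kubectl --context no-preload-473615 -n kube-system get pods -o wide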
	I1128 00:49:12.909910   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:15.420062   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:17.421038   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:19.909444   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:21.910293   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:24.412962   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:26.908733   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:28.910353   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:31.104114   45269 pod_ready.go:81] duration metric: took 4m0.000750315s waiting for pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace to be "Ready" ...
	E1128 00:49:31.104164   45269 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:49:31.104219   45269 pod_ready.go:38] duration metric: took 4m1.201800344s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:49:31.104258   45269 kubeadm.go:640] restartCluster took 5m3.38216869s
	W1128 00:49:31.104338   45269 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 00:49:31.104371   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 00:49:35.883236   45269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.778829992s)
	I1128 00:49:35.883312   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:49:35.898846   45269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:49:35.910716   45269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:49:35.921838   45269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:49:35.921883   45269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1128 00:49:35.987683   45269 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1128 00:49:35.987889   45269 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 00:49:36.153771   45269 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 00:49:36.153926   45269 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 00:49:36.154056   45269 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 00:49:36.387112   45269 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 00:49:36.387236   45269 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 00:49:36.394929   45269 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1128 00:49:36.523951   45269 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 00:49:36.526180   45269 out.go:204]   - Generating certificates and keys ...
	I1128 00:49:36.526284   45269 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 00:49:36.526378   45269 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 00:49:36.526508   45269 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 00:49:36.526603   45269 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 00:49:36.526723   45269 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 00:49:36.526807   45269 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 00:49:36.526928   45269 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 00:49:36.527026   45269 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 00:49:36.527127   45269 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 00:49:36.527671   45269 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 00:49:36.527734   45269 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 00:49:36.527807   45269 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 00:49:36.966756   45269 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 00:49:37.138717   45269 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 00:49:37.307916   45269 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 00:49:37.374115   45269 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 00:49:37.375393   45269 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 00:49:37.377224   45269 out.go:204]   - Booting up control plane ...
	I1128 00:49:37.377338   45269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 00:49:37.381887   45269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 00:49:37.383114   45269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 00:49:37.384032   45269 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 00:49:37.387460   45269 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 00:49:47.893342   45269 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.504508 seconds
	I1128 00:49:47.893497   45269 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 00:49:47.911409   45269 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 00:49:48.437988   45269 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 00:49:48.438226   45269 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-732472 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1128 00:49:48.947631   45269 kubeadm.go:322] [bootstrap-token] Using token: g2kx2b.r3qu6fui94rrmu2m
	I1128 00:49:48.949581   45269 out.go:204]   - Configuring RBAC rules ...
	I1128 00:49:48.949746   45269 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 00:49:48.960004   45269 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 00:49:48.969068   45269 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 00:49:48.973998   45269 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 00:49:48.982331   45269 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 00:49:49.099721   45269 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 00:49:49.367382   45269 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 00:49:49.369069   45269 kubeadm.go:322] 
	I1128 00:49:49.369159   45269 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 00:49:49.369196   45269 kubeadm.go:322] 
	I1128 00:49:49.369325   45269 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 00:49:49.369339   45269 kubeadm.go:322] 
	I1128 00:49:49.369383   45269 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 00:49:49.369449   45269 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 00:49:49.369519   45269 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 00:49:49.369541   45269 kubeadm.go:322] 
	I1128 00:49:49.369619   45269 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 00:49:49.369725   45269 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 00:49:49.369822   45269 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 00:49:49.369839   45269 kubeadm.go:322] 
	I1128 00:49:49.369975   45269 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1128 00:49:49.370080   45269 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 00:49:49.370092   45269 kubeadm.go:322] 
	I1128 00:49:49.370202   45269 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token g2kx2b.r3qu6fui94rrmu2m \
	I1128 00:49:49.370371   45269 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1128 00:49:49.370419   45269 kubeadm.go:322]     --control-plane 	  
	I1128 00:49:49.370432   45269 kubeadm.go:322] 
	I1128 00:49:49.370515   45269 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 00:49:49.370527   45269 kubeadm.go:322] 
	I1128 00:49:49.370639   45269 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g2kx2b.r3qu6fui94rrmu2m \
	I1128 00:49:49.370783   45269 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1128 00:49:49.371106   45269 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 00:49:49.371134   45269 cni.go:84] Creating CNI manager for ""
	I1128 00:49:49.371148   45269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:49:49.373008   45269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:49:49.374371   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:49:49.384861   45269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:49:49.402517   45269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:49:49.402582   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:49.402598   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=old-k8s-version-732472 minikube.k8s.io/updated_at=2023_11_28T00_49_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:49.441523   45269 ops.go:34] apiserver oom_adj: -16
	I1128 00:49:49.674343   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:49.796920   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:50.420537   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:50.920042   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:51.420533   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:51.920538   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:52.420730   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:52.920078   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:53.420670   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:53.920876   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:54.420798   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:54.920702   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:55.420180   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:55.920033   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:56.420702   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:56.920106   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:57.420244   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:57.920637   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:58.420226   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:58.920874   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:59.420228   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:59.920070   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:00.420845   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:00.920883   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:01.420977   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:01.920275   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:02.420097   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:02.920582   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:03.420001   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:03.919906   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:04.420071   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:04.580992   45269 kubeadm.go:1081] duration metric: took 15.178468662s to wait for elevateKubeSystemPrivileges.
	I1128 00:50:04.581023   45269 kubeadm.go:406] StartCluster complete in 5m36.912120738s
	I1128 00:50:04.581042   45269 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:50:04.581125   45269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:50:04.582704   45269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:50:04.582966   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:50:04.583000   45269 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:50:04.583077   45269 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-732472"
	I1128 00:50:04.583105   45269 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-732472"
	W1128 00:50:04.583116   45269 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:50:04.583192   45269 host.go:66] Checking if "old-k8s-version-732472" exists ...
	I1128 00:50:04.583206   45269 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-732472"
	I1128 00:50:04.583227   45269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-732472"
	I1128 00:50:04.583540   45269 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-732472"
	I1128 00:50:04.583565   45269 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-732472"
	W1128 00:50:04.583573   45269 addons.go:240] addon metrics-server should already be in state true
	I1128 00:50:04.583609   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.583635   45269 host.go:66] Checking if "old-k8s-version-732472" exists ...
	I1128 00:50:04.583640   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.583676   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.583643   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.583193   45269 config.go:182] Loaded profile config "old-k8s-version-732472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 00:50:04.584015   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.584069   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.602419   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36231
	I1128 00:50:04.602558   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35981
	I1128 00:50:04.602646   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36113
	I1128 00:50:04.603020   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.603118   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.603196   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.603571   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.603572   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.603597   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.603611   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.603729   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.603753   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.603939   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.603973   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.604086   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.604378   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:50:04.604489   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.604521   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.604617   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.604646   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.608900   45269 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-732472"
	W1128 00:50:04.608925   45269 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:50:04.608953   45269 host.go:66] Checking if "old-k8s-version-732472" exists ...
	I1128 00:50:04.611555   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.611628   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.622409   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33595
	I1128 00:50:04.622446   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45323
	I1128 00:50:04.622876   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.623000   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.623394   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.623424   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.623534   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.623567   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.623886   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.624365   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.624368   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:50:04.624556   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:50:04.626412   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:50:04.626443   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:50:04.629006   45269 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 00:50:04.630723   45269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:50:04.632378   45269 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:50:04.632395   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:50:04.632409   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:50:04.630641   45269 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 00:50:04.632467   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 00:50:04.632479   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:50:04.632126   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38563
	I1128 00:50:04.633062   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.633666   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.633692   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.634447   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.635020   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.635053   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.636332   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.636387   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.636733   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:50:04.636772   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.636795   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:50:04.636830   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.636952   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:50:04.637085   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:50:04.637132   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:50:04.637245   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:50:04.637296   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:50:04.637413   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:50:04.637448   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:50:04.637594   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:50:04.651941   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I1128 00:50:04.652604   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.653192   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.653222   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.653677   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.653838   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:50:04.655532   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:50:04.655848   45269 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:50:04.655868   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:50:04.655890   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:50:04.658852   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.659252   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:50:04.659280   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.659426   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:50:04.659602   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:50:04.659971   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:50:04.660096   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	W1128 00:50:04.792826   45269 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "old-k8s-version-732472" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1128 00:50:04.792863   45269 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1128 00:50:04.792890   45269 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:50:04.795799   45269 out.go:177] * Verifying Kubernetes components...
	I1128 00:50:04.797469   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:50:04.870889   45269 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-732472" to be "Ready" ...
	I1128 00:50:04.871024   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 00:50:04.888333   45269 node_ready.go:49] node "old-k8s-version-732472" has status "Ready":"True"
	I1128 00:50:04.888359   45269 node_ready.go:38] duration metric: took 17.44205ms waiting for node "old-k8s-version-732472" to be "Ready" ...
	I1128 00:50:04.888372   45269 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:50:04.899414   45269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:50:04.902681   45269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:04.904708   45269 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 00:50:04.904734   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 00:50:04.947930   45269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:50:04.977094   45269 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 00:50:04.977123   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 00:50:05.195712   45269 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:50:05.195795   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 00:50:05.292058   45269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:50:06.383144   45269 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.512083846s)
	I1128 00:50:06.383170   45269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.483727542s)
	I1128 00:50:06.383180   45269 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1128 00:50:06.383208   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.383221   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.383572   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.383599   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.383608   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.383606   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Closing plugin on server side
	I1128 00:50:06.383618   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.383835   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.383851   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.383870   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Closing plugin on server side
	I1128 00:50:06.423407   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.423447   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.423758   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.423783   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.423799   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Closing plugin on server side
	I1128 00:50:06.678261   45269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.73029562s)
	I1128 00:50:06.678312   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.678326   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.678640   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.678655   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.678663   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.678672   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.678927   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.678955   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.762082   45269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.46997729s)
	I1128 00:50:06.762140   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.762160   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.762538   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.762557   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.762569   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.762579   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.762599   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Closing plugin on server side
	I1128 00:50:06.762815   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.762830   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.762840   45269 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-732472"
	I1128 00:50:06.765825   45269 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 00:50:06.767637   45269 addons.go:502] enable addons completed in 2.184637132s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1128 00:50:06.959495   45269 pod_ready.go:102] pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace has status "Ready":"False"
	I1128 00:50:08.961160   45269 pod_ready.go:102] pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace has status "Ready":"False"
	I1128 00:50:11.459984   45269 pod_ready.go:102] pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace has status "Ready":"False"
	I1128 00:50:12.959294   45269 pod_ready.go:92] pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace has status "Ready":"True"
	I1128 00:50:12.959317   45269 pod_ready.go:81] duration metric: took 8.056612005s waiting for pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.959326   45269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-fsfpw" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.973244   45269 pod_ready.go:92] pod "coredns-5644d7b6d9-fsfpw" in "kube-system" namespace has status "Ready":"True"
	I1128 00:50:12.973268   45269 pod_ready.go:81] duration metric: took 13.936307ms waiting for pod "coredns-5644d7b6d9-fsfpw" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.973278   45269 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-88chq" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.980471   45269 pod_ready.go:92] pod "kube-proxy-88chq" in "kube-system" namespace has status "Ready":"True"
	I1128 00:50:12.980489   45269 pod_ready.go:81] duration metric: took 7.20414ms waiting for pod "kube-proxy-88chq" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.980496   45269 pod_ready.go:38] duration metric: took 8.092113593s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:50:12.980511   45269 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:50:12.980554   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:50:12.996604   45269 api_server.go:72] duration metric: took 8.203675443s to wait for apiserver process to appear ...
	I1128 00:50:12.996645   45269 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:50:12.996670   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:50:13.006987   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I1128 00:50:13.007986   45269 api_server.go:141] control plane version: v1.16.0
	I1128 00:50:13.008003   45269 api_server.go:131] duration metric: took 11.352257ms to wait for apiserver health ...
	I1128 00:50:13.008010   45269 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:50:13.013658   45269 system_pods.go:59] 5 kube-system pods found
	I1128 00:50:13.013677   45269 system_pods.go:61] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.013682   45269 system_pods.go:61] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.013686   45269 system_pods.go:61] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.013693   45269 system_pods.go:61] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.013697   45269 system_pods.go:61] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.013703   45269 system_pods.go:74] duration metric: took 5.688575ms to wait for pod list to return data ...
	I1128 00:50:13.013710   45269 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:50:13.016210   45269 default_sa.go:45] found service account: "default"
	I1128 00:50:13.016228   45269 default_sa.go:55] duration metric: took 2.513069ms for default service account to be created ...
	I1128 00:50:13.016234   45269 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:50:13.020464   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:13.020488   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.020496   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.020502   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.020513   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.020522   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.020544   45269 retry.go:31] will retry after 244.092512ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:13.270858   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:13.270893   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.270901   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.270907   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.270918   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.270926   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.270946   45269 retry.go:31] will retry after 311.602199ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:13.588013   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:13.588041   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.588047   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.588051   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.588057   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.588062   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.588076   45269 retry.go:31] will retry after 298.08088ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:13.891272   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:13.891302   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.891307   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.891311   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.891318   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.891323   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.891339   45269 retry.go:31] will retry after 474.390305ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:14.371201   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:14.371230   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:14.371236   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:14.371241   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:14.371248   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:14.371253   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:14.371269   45269 retry.go:31] will retry after 719.510586ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:15.096817   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:15.096846   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:15.096851   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:15.096855   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:15.096862   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:15.096866   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:15.096881   45269 retry.go:31] will retry after 684.457384ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:15.786918   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:15.786947   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:15.786952   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:15.786956   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:15.786962   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:15.786967   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:15.786982   45269 retry.go:31] will retry after 721.543291ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:16.513230   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:16.513258   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:16.513263   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:16.513268   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:16.513275   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:16.513280   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:16.513296   45269 retry.go:31] will retry after 1.405502561s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:17.926572   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:17.926610   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:17.926619   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:17.926626   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:17.926636   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:17.926642   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:17.926662   45269 retry.go:31] will retry after 1.65088536s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:19.584099   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:19.584130   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:19.584136   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:19.584140   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:19.584147   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:19.584152   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:19.584168   45269 retry.go:31] will retry after 1.660488369s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:21.250659   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:21.250706   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:21.250714   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:21.250719   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:21.250729   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:21.250736   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:21.250757   45269 retry.go:31] will retry after 1.762203818s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:23.018771   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:23.018798   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:23.018804   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:23.018808   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:23.018815   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:23.018819   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:23.018837   45269 retry.go:31] will retry after 2.558255345s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:25.584363   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:25.584394   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:25.584402   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:25.584409   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:25.584417   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:25.584422   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:25.584446   45269 retry.go:31] will retry after 4.457632402s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:30.049343   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:30.049374   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:30.049381   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:30.049388   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:30.049398   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:30.049406   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:30.049426   45269 retry.go:31] will retry after 5.077489821s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:35.133974   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:35.134001   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:35.134006   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:35.134010   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:35.134022   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:35.134029   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:35.134048   45269 retry.go:31] will retry after 5.675627515s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:40.814779   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:40.814808   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:40.814814   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:40.814818   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:40.814825   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:40.814829   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:40.814846   45269 retry.go:31] will retry after 5.701774609s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:46.524426   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:46.524467   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:46.524475   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:46.524482   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:46.524492   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:46.524499   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:46.524521   45269 retry.go:31] will retry after 7.322045517s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:53.852348   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:53.852378   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:53.852387   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:53.852394   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:53.852406   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:53.852413   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:53.852442   45269 retry.go:31] will retry after 12.532542473s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:51:06.392828   45269 system_pods.go:86] 9 kube-system pods found
	I1128 00:51:06.392858   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:51:06.392863   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:51:06.392872   45269 system_pods.go:89] "etcd-old-k8s-version-732472" [b839e564-30b4-4ddf-a7af-15a11ae6caaf] Pending
	I1128 00:51:06.392876   45269 system_pods.go:89] "kube-apiserver-old-k8s-version-732472" [7f8f59a8-21fb-4161-ba13-c123b21f74cb] Pending
	I1128 00:51:06.392882   45269 system_pods.go:89] "kube-controller-manager-old-k8s-version-732472" [0271d0e4-295a-47fc-a42f-77a8f9d71930] Pending
	I1128 00:51:06.392886   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:51:06.392889   45269 system_pods.go:89] "kube-scheduler-old-k8s-version-732472" [a22ecb05-e88d-4fc4-8e16-df419a9564e3] Pending
	I1128 00:51:06.392897   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:51:06.392901   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:51:06.392915   45269 retry.go:31] will retry after 10.519018157s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:51:16.918264   45269 system_pods.go:86] 9 kube-system pods found
	I1128 00:51:16.918303   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:51:16.918311   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:51:16.918319   45269 system_pods.go:89] "etcd-old-k8s-version-732472" [b839e564-30b4-4ddf-a7af-15a11ae6caaf] Running
	I1128 00:51:16.918326   45269 system_pods.go:89] "kube-apiserver-old-k8s-version-732472" [7f8f59a8-21fb-4161-ba13-c123b21f74cb] Running
	I1128 00:51:16.918333   45269 system_pods.go:89] "kube-controller-manager-old-k8s-version-732472" [0271d0e4-295a-47fc-a42f-77a8f9d71930] Running
	I1128 00:51:16.918340   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:51:16.918346   45269 system_pods.go:89] "kube-scheduler-old-k8s-version-732472" [a22ecb05-e88d-4fc4-8e16-df419a9564e3] Running
	I1128 00:51:16.918360   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:51:16.918375   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:51:16.918386   45269 system_pods.go:126] duration metric: took 1m3.902146285s to wait for k8s-apps to be running ...
	I1128 00:51:16.918398   45269 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:51:16.918445   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:51:16.937522   45269 system_svc.go:56] duration metric: took 19.116204ms WaitForService to wait for kubelet.
	I1128 00:51:16.937556   45269 kubeadm.go:581] duration metric: took 1m12.144633009s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:51:16.937577   45269 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:51:16.941812   45269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:51:16.941838   45269 node_conditions.go:123] node cpu capacity is 2
	I1128 00:51:16.941849   45269 node_conditions.go:105] duration metric: took 4.264769ms to run NodePressure ...
	I1128 00:51:16.941859   45269 start.go:228] waiting for startup goroutines ...
	I1128 00:51:16.941865   45269 start.go:233] waiting for cluster config update ...
	I1128 00:51:16.941874   45269 start.go:242] writing updated cluster config ...
	I1128 00:51:16.942150   45269 ssh_runner.go:195] Run: rm -f paused
	I1128 00:51:16.992567   45269 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1128 00:51:16.994677   45269 out.go:177] 
	W1128 00:51:16.996083   45269 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1128 00:51:16.997442   45269 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1128 00:51:16.998644   45269 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-732472" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 00:43:29 UTC, ends at Tue 2023-11-28 00:58:17 UTC. --
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.005253289Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133097005231217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=26ebc6d0-d66f-40ab-96f2-e8b50139bb6b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.005914104Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e9719cfc-2ddb-43ba-a005-0bf59540af2c name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.005967507Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e9719cfc-2ddb-43ba-a005-0bf59540af2c name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.006201170Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9ff96d344971a04b78226e1c5a9ebc442a23e99f6bb048a60b61d47ea0af8bb,PodSandboxId:8957d6ed3cc966e3b836721428acbefe4c3fbfda1a2b1ae44172336256a79621,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:19704ecb8a22fb777f438422b7f638673596735ee0223499327597aebef1072e,State:CONTAINER_RUNNING,CreatedAt:1701132553616756809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bv5lq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe88f49f-5fc1-4877-a982-38fee04c9e2d,},Annotations:map[string]string{io.kubernetes.container.hash: c96fec65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be4894b0fbd271d45c27b0679f1e0301e8036d8f952caa75e5c0862c06fbcdf4,PodSandboxId:d414344bff45051179f4bf4170323625abe5d4614e702700f666b8506881565c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701132553290577595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8fc9309-7354-44e3-aa10-f4fb3c185f62,},Annotations:map[string]string{io.kubernetes.container.hash: 54a3e66d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55e3f9ef21a0888a801827eb2ef31026c3bd1f4cb56a7ca88168e212db62c6b,PodSandboxId:80b0964cadf5d8e8d5269d6832c774d172211b59b42184ec2db7849f7694103c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701132552667379997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kbrjg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 881031bb-af46-48a7-b609-7fb1c96b2056,},Annotations:map[string]string{io.kubernetes.container.hash: 2cea8ed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e716c8ec94f44fecf8ef86b4a1f0fff5462f7b14a046f897025b638110ac22c3,PodSandboxId:0df9afb0e7e3c9a070809e4f05a24ea88395c68360cd7570744433bcfaaec601,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701132530924570327,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 668da7c7e0f6810eef3e399e8e6f2210,},Anno
tations:map[string]string{io.kubernetes.container.hash: 14c398b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ee6a417133208323eff9b8d4fd4e62628ee8dc816843c4cd608a63b8118dc4,PodSandboxId:76d3768344aa061f7c23d244d1ba4c84841c0ce92b7d044f1d08872dc4990b19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:45ece34cbcc6c82c13e0e535245454d071df5a3b78b23eb779c1b6b9ab3602d2,State:CONTAINER_RUNNING,CreatedAt:1701132530337556561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef9b4f333e88bdf9ad3d1f8cdb01d80,},Annotations:map
[string]string{io.kubernetes.container.hash: ee29696d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6eb74031eeb360d38b73f1aee944fe0b26dc37207fe030ae8638cca11ddcfba,PodSandboxId:7aba4e9fef68a95daeea9a95e45e17c576c835c50901bc834fda97389ce459f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:8691a74e5237be5a787cea07aefa76290f24bfac5c6b7a07469172fef09305c6,State:CONTAINER_RUNNING,CreatedAt:1701132530037671548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f29c1d103d29f4a14ff04b50bbbde101,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5aa5271b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c04934db0c7aba72870e308cff714f11993ca790a97d0c06f7d7008c59b61278,PodSandboxId:bfdb5d0121c2af09b54a5acc1d5766997d6a724da7565139175226d5ac1b17ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:0fbe1bf4175a8c9b7428f845038392769805f82a277f34ee0bfa3d893b7fe9f5,State:CONTAINER_RUNNING,CreatedAt:1701132529868536366,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec5bab202304e00670395448b299872,},A
nnotations:map[string]string{io.kubernetes.container.hash: 97159cab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e9719cfc-2ddb-43ba-a005-0bf59540af2c name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.049849148Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=51b656f0-fd10-401d-bd33-0d8ef8bc98a8 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.049903127Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=51b656f0-fd10-401d-bd33-0d8ef8bc98a8 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.051475039Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8273e0b2-72b6-4266-8450-3ca7cfd698d0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.051857807Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133097051843901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=8273e0b2-72b6-4266-8450-3ca7cfd698d0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.052548937Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=52dc6d5e-5b71-407f-a4f7-0a17faf087af name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.052593642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=52dc6d5e-5b71-407f-a4f7-0a17faf087af name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.052769243Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9ff96d344971a04b78226e1c5a9ebc442a23e99f6bb048a60b61d47ea0af8bb,PodSandboxId:8957d6ed3cc966e3b836721428acbefe4c3fbfda1a2b1ae44172336256a79621,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:19704ecb8a22fb777f438422b7f638673596735ee0223499327597aebef1072e,State:CONTAINER_RUNNING,CreatedAt:1701132553616756809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bv5lq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe88f49f-5fc1-4877-a982-38fee04c9e2d,},Annotations:map[string]string{io.kubernetes.container.hash: c96fec65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be4894b0fbd271d45c27b0679f1e0301e8036d8f952caa75e5c0862c06fbcdf4,PodSandboxId:d414344bff45051179f4bf4170323625abe5d4614e702700f666b8506881565c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701132553290577595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8fc9309-7354-44e3-aa10-f4fb3c185f62,},Annotations:map[string]string{io.kubernetes.container.hash: 54a3e66d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55e3f9ef21a0888a801827eb2ef31026c3bd1f4cb56a7ca88168e212db62c6b,PodSandboxId:80b0964cadf5d8e8d5269d6832c774d172211b59b42184ec2db7849f7694103c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701132552667379997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kbrjg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 881031bb-af46-48a7-b609-7fb1c96b2056,},Annotations:map[string]string{io.kubernetes.container.hash: 2cea8ed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e716c8ec94f44fecf8ef86b4a1f0fff5462f7b14a046f897025b638110ac22c3,PodSandboxId:0df9afb0e7e3c9a070809e4f05a24ea88395c68360cd7570744433bcfaaec601,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701132530924570327,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 668da7c7e0f6810eef3e399e8e6f2210,},Anno
tations:map[string]string{io.kubernetes.container.hash: 14c398b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ee6a417133208323eff9b8d4fd4e62628ee8dc816843c4cd608a63b8118dc4,PodSandboxId:76d3768344aa061f7c23d244d1ba4c84841c0ce92b7d044f1d08872dc4990b19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:45ece34cbcc6c82c13e0e535245454d071df5a3b78b23eb779c1b6b9ab3602d2,State:CONTAINER_RUNNING,CreatedAt:1701132530337556561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef9b4f333e88bdf9ad3d1f8cdb01d80,},Annotations:map
[string]string{io.kubernetes.container.hash: ee29696d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6eb74031eeb360d38b73f1aee944fe0b26dc37207fe030ae8638cca11ddcfba,PodSandboxId:7aba4e9fef68a95daeea9a95e45e17c576c835c50901bc834fda97389ce459f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:8691a74e5237be5a787cea07aefa76290f24bfac5c6b7a07469172fef09305c6,State:CONTAINER_RUNNING,CreatedAt:1701132530037671548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f29c1d103d29f4a14ff04b50bbbde101,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5aa5271b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c04934db0c7aba72870e308cff714f11993ca790a97d0c06f7d7008c59b61278,PodSandboxId:bfdb5d0121c2af09b54a5acc1d5766997d6a724da7565139175226d5ac1b17ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:0fbe1bf4175a8c9b7428f845038392769805f82a277f34ee0bfa3d893b7fe9f5,State:CONTAINER_RUNNING,CreatedAt:1701132529868536366,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec5bab202304e00670395448b299872,},A
nnotations:map[string]string{io.kubernetes.container.hash: 97159cab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=52dc6d5e-5b71-407f-a4f7-0a17faf087af name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.098155451Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b9bb8632-bd30-4854-a990-4025576c0ec6 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.098270676Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b9bb8632-bd30-4854-a990-4025576c0ec6 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.099517868Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=59a7d541-9a42-48a1-9911-c5891508ea86 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.099927326Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133097099913797,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=59a7d541-9a42-48a1-9911-c5891508ea86 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.100599022Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7f0539a1-7808-4697-9d3b-5a816e8124ff name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.100674005Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7f0539a1-7808-4697-9d3b-5a816e8124ff name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.100929203Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9ff96d344971a04b78226e1c5a9ebc442a23e99f6bb048a60b61d47ea0af8bb,PodSandboxId:8957d6ed3cc966e3b836721428acbefe4c3fbfda1a2b1ae44172336256a79621,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:19704ecb8a22fb777f438422b7f638673596735ee0223499327597aebef1072e,State:CONTAINER_RUNNING,CreatedAt:1701132553616756809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bv5lq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe88f49f-5fc1-4877-a982-38fee04c9e2d,},Annotations:map[string]string{io.kubernetes.container.hash: c96fec65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be4894b0fbd271d45c27b0679f1e0301e8036d8f952caa75e5c0862c06fbcdf4,PodSandboxId:d414344bff45051179f4bf4170323625abe5d4614e702700f666b8506881565c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701132553290577595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8fc9309-7354-44e3-aa10-f4fb3c185f62,},Annotations:map[string]string{io.kubernetes.container.hash: 54a3e66d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55e3f9ef21a0888a801827eb2ef31026c3bd1f4cb56a7ca88168e212db62c6b,PodSandboxId:80b0964cadf5d8e8d5269d6832c774d172211b59b42184ec2db7849f7694103c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701132552667379997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kbrjg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 881031bb-af46-48a7-b609-7fb1c96b2056,},Annotations:map[string]string{io.kubernetes.container.hash: 2cea8ed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e716c8ec94f44fecf8ef86b4a1f0fff5462f7b14a046f897025b638110ac22c3,PodSandboxId:0df9afb0e7e3c9a070809e4f05a24ea88395c68360cd7570744433bcfaaec601,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701132530924570327,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 668da7c7e0f6810eef3e399e8e6f2210,},Anno
tations:map[string]string{io.kubernetes.container.hash: 14c398b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ee6a417133208323eff9b8d4fd4e62628ee8dc816843c4cd608a63b8118dc4,PodSandboxId:76d3768344aa061f7c23d244d1ba4c84841c0ce92b7d044f1d08872dc4990b19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:45ece34cbcc6c82c13e0e535245454d071df5a3b78b23eb779c1b6b9ab3602d2,State:CONTAINER_RUNNING,CreatedAt:1701132530337556561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef9b4f333e88bdf9ad3d1f8cdb01d80,},Annotations:map
[string]string{io.kubernetes.container.hash: ee29696d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6eb74031eeb360d38b73f1aee944fe0b26dc37207fe030ae8638cca11ddcfba,PodSandboxId:7aba4e9fef68a95daeea9a95e45e17c576c835c50901bc834fda97389ce459f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:8691a74e5237be5a787cea07aefa76290f24bfac5c6b7a07469172fef09305c6,State:CONTAINER_RUNNING,CreatedAt:1701132530037671548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f29c1d103d29f4a14ff04b50bbbde101,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5aa5271b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c04934db0c7aba72870e308cff714f11993ca790a97d0c06f7d7008c59b61278,PodSandboxId:bfdb5d0121c2af09b54a5acc1d5766997d6a724da7565139175226d5ac1b17ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:0fbe1bf4175a8c9b7428f845038392769805f82a277f34ee0bfa3d893b7fe9f5,State:CONTAINER_RUNNING,CreatedAt:1701132529868536366,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec5bab202304e00670395448b299872,},A
nnotations:map[string]string{io.kubernetes.container.hash: 97159cab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7f0539a1-7808-4697-9d3b-5a816e8124ff name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.137139719Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5cfbbf20-183c-4b54-a381-f4833690e3a6 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.137202387Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5cfbbf20-183c-4b54-a381-f4833690e3a6 name=/runtime.v1.RuntimeService/Version
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.139161484Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=dde24e52-c24d-445d-a328-b3a51e902a89 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.139514289Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133097139497646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=dde24e52-c24d-445d-a328-b3a51e902a89 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.140177894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=12e291f1-a7da-42ec-ae86-8805b1210100 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.140224842Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=12e291f1-a7da-42ec-ae86-8805b1210100 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 00:58:17 no-preload-473615 crio[741]: time="2023-11-28 00:58:17.140380255Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9ff96d344971a04b78226e1c5a9ebc442a23e99f6bb048a60b61d47ea0af8bb,PodSandboxId:8957d6ed3cc966e3b836721428acbefe4c3fbfda1a2b1ae44172336256a79621,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:19704ecb8a22fb777f438422b7f638673596735ee0223499327597aebef1072e,State:CONTAINER_RUNNING,CreatedAt:1701132553616756809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bv5lq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe88f49f-5fc1-4877-a982-38fee04c9e2d,},Annotations:map[string]string{io.kubernetes.container.hash: c96fec65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be4894b0fbd271d45c27b0679f1e0301e8036d8f952caa75e5c0862c06fbcdf4,PodSandboxId:d414344bff45051179f4bf4170323625abe5d4614e702700f666b8506881565c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701132553290577595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8fc9309-7354-44e3-aa10-f4fb3c185f62,},Annotations:map[string]string{io.kubernetes.container.hash: 54a3e66d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55e3f9ef21a0888a801827eb2ef31026c3bd1f4cb56a7ca88168e212db62c6b,PodSandboxId:80b0964cadf5d8e8d5269d6832c774d172211b59b42184ec2db7849f7694103c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701132552667379997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kbrjg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 881031bb-af46-48a7-b609-7fb1c96b2056,},Annotations:map[string]string{io.kubernetes.container.hash: 2cea8ed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e716c8ec94f44fecf8ef86b4a1f0fff5462f7b14a046f897025b638110ac22c3,PodSandboxId:0df9afb0e7e3c9a070809e4f05a24ea88395c68360cd7570744433bcfaaec601,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701132530924570327,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 668da7c7e0f6810eef3e399e8e6f2210,},Anno
tations:map[string]string{io.kubernetes.container.hash: 14c398b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ee6a417133208323eff9b8d4fd4e62628ee8dc816843c4cd608a63b8118dc4,PodSandboxId:76d3768344aa061f7c23d244d1ba4c84841c0ce92b7d044f1d08872dc4990b19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:45ece34cbcc6c82c13e0e535245454d071df5a3b78b23eb779c1b6b9ab3602d2,State:CONTAINER_RUNNING,CreatedAt:1701132530337556561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef9b4f333e88bdf9ad3d1f8cdb01d80,},Annotations:map
[string]string{io.kubernetes.container.hash: ee29696d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6eb74031eeb360d38b73f1aee944fe0b26dc37207fe030ae8638cca11ddcfba,PodSandboxId:7aba4e9fef68a95daeea9a95e45e17c576c835c50901bc834fda97389ce459f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:8691a74e5237be5a787cea07aefa76290f24bfac5c6b7a07469172fef09305c6,State:CONTAINER_RUNNING,CreatedAt:1701132530037671548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f29c1d103d29f4a14ff04b50bbbde101,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5aa5271b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c04934db0c7aba72870e308cff714f11993ca790a97d0c06f7d7008c59b61278,PodSandboxId:bfdb5d0121c2af09b54a5acc1d5766997d6a724da7565139175226d5ac1b17ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:0fbe1bf4175a8c9b7428f845038392769805f82a277f34ee0bfa3d893b7fe9f5,State:CONTAINER_RUNNING,CreatedAt:1701132529868536366,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec5bab202304e00670395448b299872,},A
nnotations:map[string]string{io.kubernetes.container.hash: 97159cab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=12e291f1-a7da-42ec-ae86-8805b1210100 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d9ff96d344971       df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55   9 minutes ago       Running             kube-proxy                0                   8957d6ed3cc96       kube-proxy-bv5lq
	be4894b0fbd27       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   d414344bff450       storage-provisioner
	a55e3f9ef21a0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   80b0964cadf5d       coredns-76f75df574-kbrjg
	e716c8ec94f44       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   9 minutes ago       Running             etcd                      2                   0df9afb0e7e3c       etcd-no-preload-473615
	26ee6a4171332       4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9   9 minutes ago       Running             kube-scheduler            2                   76d3768344aa0       kube-scheduler-no-preload-473615
	b6eb74031eeb3       e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7   9 minutes ago       Running             kube-apiserver            2                   7aba4e9fef68a       kube-apiserver-no-preload-473615
	c04934db0c7ab       e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4   9 minutes ago       Running             kube-controller-manager   2                   bfdb5d0121c2a       kube-controller-manager-no-preload-473615
	
	* 
	* ==> coredns [a55e3f9ef21a0888a801827eb2ef31026c3bd1f4cb56a7ca88168e212db62c6b] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:35073 - 35380 "HINFO IN 7970207649571234781.3920572336514307717. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010882015s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-473615
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-473615
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=no-preload-473615
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T00_48_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 00:48:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-473615
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 00:58:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 00:54:24 +0000   Tue, 28 Nov 2023 00:48:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 00:54:24 +0000   Tue, 28 Nov 2023 00:48:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 00:54:24 +0000   Tue, 28 Nov 2023 00:48:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 00:54:24 +0000   Tue, 28 Nov 2023 00:48:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.195
	  Hostname:    no-preload-473615
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad5a8ba507ca41a386ef5e8d7f5846b8
	  System UUID:                ad5a8ba5-07ca-41a3-86ef-5e8d7f5846b8
	  Boot ID:                    bdb44941-15f5-4e15-8e88-1f76195dc2ba
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.0
	  Kube-Proxy Version:         v1.29.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-kbrjg                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-no-preload-473615                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-no-preload-473615             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-no-preload-473615    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-bv5lq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-no-preload-473615             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-57f55c9bc5-mpqdq              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m3s   kube-proxy       
	  Normal  Starting                 9m19s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node no-preload-473615 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node no-preload-473615 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node no-preload-473615 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m19s  kubelet          Node no-preload-473615 status is now: NodeNotReady
	  Normal  NodeReady                9m19s  kubelet          Node no-preload-473615 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m8s   node-controller  Node no-preload-473615 event: Registered Node no-preload-473615 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov28 00:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070723] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.511762] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.475541] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.133405] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.473108] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.738423] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.118584] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.146182] systemd-fstab-generator[692]: Ignoring "noauto" for root device
	[  +0.118662] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[  +0.232415] systemd-fstab-generator[727]: Ignoring "noauto" for root device
	[Nov28 00:44] systemd-fstab-generator[1352]: Ignoring "noauto" for root device
	[ +19.530786] kauditd_printk_skb: 34 callbacks suppressed
	[Nov28 00:48] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.339943] systemd-fstab-generator[4120]: Ignoring "noauto" for root device
	[  +9.308883] systemd-fstab-generator[4450]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [e716c8ec94f44fecf8ef86b4a1f0fff5462f7b14a046f897025b638110ac22c3] <==
	* {"level":"info","ts":"2023-11-28T00:48:52.439477Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"986b17048fbf010b","local-member-id":"568dd214a70d80b9","added-peer-id":"568dd214a70d80b9","added-peer-peer-urls":["https://192.168.61.195:2380"]}
	{"level":"info","ts":"2023-11-28T00:48:52.453606Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-28T00:48:52.455376Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"568dd214a70d80b9","initial-advertise-peer-urls":["https://192.168.61.195:2380"],"listen-peer-urls":["https://192.168.61.195:2380"],"advertise-client-urls":["https://192.168.61.195:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.195:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-28T00:48:52.455088Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.195:2380"}
	{"level":"info","ts":"2023-11-28T00:48:52.455447Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.195:2380"}
	{"level":"info","ts":"2023-11-28T00:48:52.456219Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-28T00:48:52.489138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"568dd214a70d80b9 is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-28T00:48:52.489199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"568dd214a70d80b9 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-28T00:48:52.489227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"568dd214a70d80b9 received MsgPreVoteResp from 568dd214a70d80b9 at term 1"}
	{"level":"info","ts":"2023-11-28T00:48:52.489239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"568dd214a70d80b9 became candidate at term 2"}
	{"level":"info","ts":"2023-11-28T00:48:52.489245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"568dd214a70d80b9 received MsgVoteResp from 568dd214a70d80b9 at term 2"}
	{"level":"info","ts":"2023-11-28T00:48:52.489253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"568dd214a70d80b9 became leader at term 2"}
	{"level":"info","ts":"2023-11-28T00:48:52.48926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 568dd214a70d80b9 elected leader 568dd214a70d80b9 at term 2"}
	{"level":"info","ts":"2023-11-28T00:48:52.493302Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"568dd214a70d80b9","local-member-attributes":"{Name:no-preload-473615 ClientURLs:[https://192.168.61.195:2379]}","request-path":"/0/members/568dd214a70d80b9/attributes","cluster-id":"986b17048fbf010b","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-28T00:48:52.494125Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T00:48:52.49467Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T00:48:52.494847Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T00:48:52.498575Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.195:2379"}
	{"level":"info","ts":"2023-11-28T00:48:52.498693Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"986b17048fbf010b","local-member-id":"568dd214a70d80b9","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T00:48:52.498775Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T00:48:52.498811Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T00:48:52.499189Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-28T00:48:52.499232Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-28T00:48:52.50516Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-28T00:49:10.648662Z","caller":"traceutil/trace.go:171","msg":"trace[1345949121] transaction","detail":"{read_only:false; response_revision:323; number_of_response:1; }","duration":"113.257532ms","start":"2023-11-28T00:49:10.535364Z","end":"2023-11-28T00:49:10.648622Z","steps":["trace[1345949121] 'process raft request'  (duration: 76.798444ms)","trace[1345949121] 'compare'  (duration: 34.061755ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  00:58:17 up 14 min,  0 users,  load average: 0.11, 0.27, 0.25
	Linux no-preload-473615 5.10.57 #1 SMP Mon Nov 27 21:58:27 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b6eb74031eeb360d38b73f1aee944fe0b26dc37207fe030ae8638cca11ddcfba] <==
	* I1128 00:52:12.737463       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 00:53:54.516951       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:53:54.517409       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1128 00:53:55.517845       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:53:55.518006       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 00:53:55.518128       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 00:53:55.518195       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:53:55.518361       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 00:53:55.519657       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 00:54:55.518417       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:54:55.518603       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 00:54:55.518635       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 00:54:55.520894       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:54:55.520946       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 00:54:55.520956       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 00:56:55.519647       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:56:55.520143       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 00:56:55.520205       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 00:56:55.521974       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:56:55.522093       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 00:56:55.522134       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [c04934db0c7aba72870e308cff714f11993ca790a97d0c06f7d7008c59b61278] <==
	* I1128 00:52:41.267409       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="96.446µs"
	E1128 00:53:09.735241       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:53:10.237769       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:53:39.741299       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:53:40.248237       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:54:09.747808       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:54:10.257716       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:54:39.752988       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:54:40.269926       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:55:09.759384       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:55:10.278878       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1128 00:55:19.264935       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="296.214µs"
	I1128 00:55:34.265848       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="162.445µs"
	E1128 00:55:39.766279       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:55:40.287532       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:56:09.774435       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:56:10.298297       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:56:39.779626       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:56:40.307518       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:57:09.786381       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:57:10.316515       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:57:39.793860       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:57:40.325303       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:58:09.800517       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:58:10.336164       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [d9ff96d344971a04b78226e1c5a9ebc442a23e99f6bb048a60b61d47ea0af8bb] <==
	* I1128 00:49:13.808877       1 server_others.go:72] "Using iptables proxy"
	I1128 00:49:13.825858       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.195"]
	I1128 00:49:13.883786       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I1128 00:49:13.883873       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1128 00:49:13.883900       1 server_others.go:168] "Using iptables Proxier"
	I1128 00:49:13.887159       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1128 00:49:13.887496       1 server.go:865] "Version info" version="v1.29.0-rc.0"
	I1128 00:49:13.887548       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 00:49:13.888944       1 config.go:188] "Starting service config controller"
	I1128 00:49:13.889012       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1128 00:49:13.889206       1 config.go:97] "Starting endpoint slice config controller"
	I1128 00:49:13.889220       1 config.go:315] "Starting node config controller"
	I1128 00:49:13.889381       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1128 00:49:13.889226       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1128 00:49:13.990360       1 shared_informer.go:318] Caches are synced for service config
	I1128 00:49:13.990439       1 shared_informer.go:318] Caches are synced for node config
	I1128 00:49:13.990514       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [26ee6a417133208323eff9b8d4fd4e62628ee8dc816843c4cd608a63b8118dc4] <==
	* W1128 00:48:54.533640       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 00:48:54.533682       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1128 00:48:54.533592       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1128 00:48:54.533727       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1128 00:48:54.533745       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 00:48:54.533787       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1128 00:48:54.533988       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 00:48:54.534095       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1128 00:48:55.457212       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 00:48:55.457266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1128 00:48:55.555810       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 00:48:55.555881       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1128 00:48:55.572607       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1128 00:48:55.572715       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1128 00:48:55.709193       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 00:48:55.709265       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1128 00:48:55.735655       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1128 00:48:55.735812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1128 00:48:55.779579       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1128 00:48:55.779638       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 00:48:55.792797       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1128 00:48:55.792850       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1128 00:48:55.832668       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1128 00:48:55.832757       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1128 00:48:58.522458       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 00:43:29 UTC, ends at Tue 2023-11-28 00:58:17 UTC. --
	Nov 28 00:55:34 no-preload-473615 kubelet[4457]: E1128 00:55:34.245652    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 00:55:48 no-preload-473615 kubelet[4457]: E1128 00:55:48.244356    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 00:55:58 no-preload-473615 kubelet[4457]: E1128 00:55:58.321742    4457 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 00:55:58 no-preload-473615 kubelet[4457]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 00:55:58 no-preload-473615 kubelet[4457]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 00:55:58 no-preload-473615 kubelet[4457]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 00:55:59 no-preload-473615 kubelet[4457]: E1128 00:55:59.242796    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 00:56:10 no-preload-473615 kubelet[4457]: E1128 00:56:10.243478    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 00:56:22 no-preload-473615 kubelet[4457]: E1128 00:56:22.244580    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 00:56:34 no-preload-473615 kubelet[4457]: E1128 00:56:34.244897    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 00:56:46 no-preload-473615 kubelet[4457]: E1128 00:56:46.244872    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 00:56:58 no-preload-473615 kubelet[4457]: E1128 00:56:58.320673    4457 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 00:56:58 no-preload-473615 kubelet[4457]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 00:56:58 no-preload-473615 kubelet[4457]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 00:56:58 no-preload-473615 kubelet[4457]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 00:57:00 no-preload-473615 kubelet[4457]: E1128 00:57:00.253521    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 00:57:15 no-preload-473615 kubelet[4457]: E1128 00:57:15.244605    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 00:57:29 no-preload-473615 kubelet[4457]: E1128 00:57:29.243889    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 00:57:41 no-preload-473615 kubelet[4457]: E1128 00:57:41.244271    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 00:57:53 no-preload-473615 kubelet[4457]: E1128 00:57:53.244101    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 00:57:58 no-preload-473615 kubelet[4457]: E1128 00:57:58.337156    4457 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 00:57:58 no-preload-473615 kubelet[4457]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 00:57:58 no-preload-473615 kubelet[4457]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 00:57:58 no-preload-473615 kubelet[4457]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 00:58:04 no-preload-473615 kubelet[4457]: E1128 00:58:04.243602    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	
	* 
	* ==> storage-provisioner [be4894b0fbd271d45c27b0679f1e0301e8036d8f952caa75e5c0862c06fbcdf4] <==
	* I1128 00:49:13.549592       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 00:49:13.582149       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 00:49:13.582267       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1128 00:49:13.599096       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1128 00:49:13.600485       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"52391cb7-4015-4290-b5d2-dc1b45117cb2", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-473615_eb5b9af1-1d07-40be-8cb8-3846b7bbc919 became leader
	I1128 00:49:13.601326       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-473615_eb5b9af1-1d07-40be-8cb8-3846b7bbc919!
	I1128 00:49:13.702387       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-473615_eb5b9af1-1d07-40be-8cb8-3846b7bbc919!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-473615 -n no-preload-473615
E1128 00:58:18.479779   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-473615 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-mpqdq
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-473615 describe pod metrics-server-57f55c9bc5-mpqdq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-473615 describe pod metrics-server-57f55c9bc5-mpqdq: exit status 1 (65.68666ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-mpqdq" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-473615 describe pod metrics-server-57f55c9bc5-mpqdq: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.03s)
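Note on this failure mode: the kubelet log above shows metrics-server stuck in ImagePullBackOff because the addon was enabled with --registries=MetricsServer=fake.domain (see the Audit table in the following post-mortem), so its image cannot be pulled and the pod never reaches Running. A minimal sketch of how this state could be inspected by hand against the same profile, assuming the cluster is still up; the deployment name metrics-server is inferred from the ReplicaSet name in the logs:

	# list the kube-system pods and check the metrics-server pod's status
	kubectl --context no-preload-473615 -n kube-system get pods -o wide
	# describe the deployment to see the image/registry override and pull errors
	kubectl --context no-preload-473615 -n kube-system describe deploy metrics-server
	# recent events, sorted by time, show the repeated BackOff image pulls
	kubectl --context no-preload-473615 -n kube-system get events --sort-by=.lastTimestamp | tail -n 20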

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1128 00:51:55.432854   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1128 00:53:50.988636   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1128 00:55:14.035809   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1128 00:55:27.680572   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1128 00:56:55.433205   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-732472 -n old-k8s-version-732472
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-11-28 01:00:17.583855182 +0000 UTC m=+5719.148881806
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-732472 -n old-k8s-version-732472
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-732472 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-732472 logs -n 25: (1.720364519s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-188325                                 | cert-options-188325          | jenkins | v1.32.0 | 28 Nov 23 00:33 UTC | 28 Nov 23 00:33 UTC |
	| start   | -p no-preload-473615                                   | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:33 UTC | 28 Nov 23 00:36 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| start   | -p cert-expiration-747416                              | cert-expiration-747416       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:35 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-747416                              | cert-expiration-747416       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:35 UTC |
	| start   | -p embed-certs-304541                                  | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-732472        | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-732472                              | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-789586                              | stopped-upgrade-789586       | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-304541            | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-304541                                  | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-789586                              | stopped-upgrade-789586       | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-001086 | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	|         | disable-driver-mounts-001086                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:37 UTC |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-473615             | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:37 UTC | 28 Nov 23 00:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-473615                                   | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-732472             | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-488423  | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC | 28 Nov 23 00:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-732472                              | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC | 28 Nov 23 00:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-304541                 | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-304541                                  | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC | 28 Nov 23 00:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-473615                  | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-473615                                   | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC | 28 Nov 23 00:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-488423       | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:40 UTC | 28 Nov 23 00:48 UTC |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 00:40:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 00:40:42.238362   46126 out.go:296] Setting OutFile to fd 1 ...
	I1128 00:40:42.238498   46126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:40:42.238513   46126 out.go:309] Setting ErrFile to fd 2...
	I1128 00:40:42.238520   46126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:40:42.238712   46126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1128 00:40:42.239236   46126 out.go:303] Setting JSON to false
	I1128 00:40:42.240138   46126 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4989,"bootTime":1701127053,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 00:40:42.240194   46126 start.go:138] virtualization: kvm guest
	I1128 00:40:42.242505   46126 out.go:177] * [default-k8s-diff-port-488423] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 00:40:42.243937   46126 out.go:177]   - MINIKUBE_LOCATION=17206
	I1128 00:40:42.243990   46126 notify.go:220] Checking for updates...
	I1128 00:40:42.245317   46126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 00:40:42.246717   46126 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:40:42.248096   46126 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1128 00:40:42.249294   46126 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 00:40:42.250596   46126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 00:40:42.252296   46126 config.go:182] Loaded profile config "default-k8s-diff-port-488423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:40:42.252793   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:40:42.252854   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:40:42.267605   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45895
	I1128 00:40:42.267958   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:40:42.268457   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:40:42.268479   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:40:42.268774   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:40:42.268971   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:40:42.269215   46126 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 00:40:42.269470   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:40:42.269501   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:40:42.283984   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34957
	I1128 00:40:42.284338   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:40:42.284786   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:40:42.284808   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:40:42.285077   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:40:42.285263   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:40:42.319077   46126 out.go:177] * Using the kvm2 driver based on existing profile
	I1128 00:40:42.320321   46126 start.go:298] selected driver: kvm2
	I1128 00:40:42.320332   46126 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-488423 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-488423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.242 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:40:42.320481   46126 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 00:40:42.321242   46126 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:40:42.321325   46126 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17206-4749/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 00:40:42.335477   46126 install.go:137] /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I1128 00:40:42.335818   46126 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 00:40:42.335887   46126 cni.go:84] Creating CNI manager for ""
	I1128 00:40:42.335907   46126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:40:42.335922   46126 start_flags.go:323] config:
	{Name:default-k8s-diff-port-488423 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-488423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.242 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:40:42.336092   46126 iso.go:125] acquiring lock: {Name:mkcbf4fbddcb89ef7fa17df683cb708781ecb7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:40:42.337823   46126 out.go:177] * Starting control plane node default-k8s-diff-port-488423 in cluster default-k8s-diff-port-488423
	I1128 00:40:40.713025   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:40:42.338980   46126 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:40:42.339010   46126 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1128 00:40:42.339024   46126 cache.go:56] Caching tarball of preloaded images
	I1128 00:40:42.339105   46126 preload.go:174] Found /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 00:40:42.339117   46126 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 00:40:42.339232   46126 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/config.json ...
	I1128 00:40:42.339416   46126 start.go:365] acquiring machines lock for default-k8s-diff-port-488423: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 00:40:43.785024   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:40:49.865013   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:40:52.936964   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:40:59.017058   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:02.089017   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:08.169021   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:11.241040   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:17.321032   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:20.393000   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:26.473039   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:29.544989   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:35.625074   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:38.697020   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:44.777040   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:47.849040   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:53.929055   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:57.001005   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:03.081016   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:06.153078   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:12.233029   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:15.305165   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:21.385067   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:24.457038   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:30.537025   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:33.608998   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:39.689061   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:42.761012   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:48.841003   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:51.912985   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:54.916816   45580 start.go:369] acquired machines lock for "embed-certs-304541" in 3m47.030120592s
	I1128 00:42:54.916877   45580 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:42:54.916890   45580 fix.go:54] fixHost starting: 
	I1128 00:42:54.917233   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:42:54.917266   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:42:54.932296   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38887
	I1128 00:42:54.932712   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:42:54.933230   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:42:54.933254   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:42:54.933574   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:42:54.933837   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:42:54.934006   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:42:54.935712   45580 fix.go:102] recreateIfNeeded on embed-certs-304541: state=Stopped err=<nil>
	I1128 00:42:54.935737   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	W1128 00:42:54.935937   45580 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:42:54.937893   45580 out.go:177] * Restarting existing kvm2 VM for "embed-certs-304541" ...
	I1128 00:42:54.914751   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:42:54.914794   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:42:54.916666   45269 machine.go:91] provisioned docker machine in 4m37.413850055s
	I1128 00:42:54.916713   45269 fix.go:56] fixHost completed within 4m37.433506318s
	I1128 00:42:54.916719   45269 start.go:83] releasing machines lock for "old-k8s-version-732472", held for 4m37.433526985s
	W1128 00:42:54.916738   45269 start.go:691] error starting host: provision: host is not running
	W1128 00:42:54.916844   45269 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1128 00:42:54.916854   45269 start.go:706] Will try again in 5 seconds ...
	I1128 00:42:54.939120   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Start
	I1128 00:42:54.939284   45580 main.go:141] libmachine: (embed-certs-304541) Ensuring networks are active...
	I1128 00:42:54.940122   45580 main.go:141] libmachine: (embed-certs-304541) Ensuring network default is active
	I1128 00:42:54.940636   45580 main.go:141] libmachine: (embed-certs-304541) Ensuring network mk-embed-certs-304541 is active
	I1128 00:42:54.941025   45580 main.go:141] libmachine: (embed-certs-304541) Getting domain xml...
	I1128 00:42:54.941883   45580 main.go:141] libmachine: (embed-certs-304541) Creating domain...
	I1128 00:42:56.157644   45580 main.go:141] libmachine: (embed-certs-304541) Waiting to get IP...
	I1128 00:42:56.158479   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:56.158803   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:56.158888   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:56.158791   46474 retry.go:31] will retry after 235.266272ms: waiting for machine to come up
	I1128 00:42:56.395238   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:56.395630   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:56.395664   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:56.395579   46474 retry.go:31] will retry after 352.110542ms: waiting for machine to come up
	I1128 00:42:56.749150   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:56.749542   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:56.749570   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:56.749500   46474 retry.go:31] will retry after 364.122623ms: waiting for machine to come up
	I1128 00:42:57.115054   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:57.115497   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:57.115526   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:57.115450   46474 retry.go:31] will retry after 583.197763ms: waiting for machine to come up
	I1128 00:42:57.700134   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:57.700551   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:57.700577   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:57.700497   46474 retry.go:31] will retry after 515.615548ms: waiting for machine to come up
	I1128 00:42:59.917964   45269 start.go:365] acquiring machines lock for old-k8s-version-732472: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 00:42:58.218252   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:58.218630   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:58.218668   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:58.218611   46474 retry.go:31] will retry after 690.258077ms: waiting for machine to come up
	I1128 00:42:58.910090   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:58.910438   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:58.910464   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:58.910413   46474 retry.go:31] will retry after 737.779074ms: waiting for machine to come up
	I1128 00:42:59.649308   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:59.649634   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:59.649661   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:59.649609   46474 retry.go:31] will retry after 1.23938471s: waiting for machine to come up
	I1128 00:43:00.890867   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:00.891318   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:00.891356   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:00.891298   46474 retry.go:31] will retry after 1.475598535s: waiting for machine to come up
	I1128 00:43:02.368630   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:02.369159   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:02.369189   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:02.369085   46474 retry.go:31] will retry after 2.323321s: waiting for machine to come up
	I1128 00:43:04.695735   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:04.696175   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:04.696208   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:04.696131   46474 retry.go:31] will retry after 1.903335453s: waiting for machine to come up
	I1128 00:43:06.601229   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:06.601657   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:06.601687   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:06.601612   46474 retry.go:31] will retry after 2.205948796s: waiting for machine to come up
	I1128 00:43:08.809792   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:08.810161   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:08.810188   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:08.810149   46474 retry.go:31] will retry after 3.31430253s: waiting for machine to come up
	I1128 00:43:12.126852   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:12.127294   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:12.127323   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:12.127249   46474 retry.go:31] will retry after 3.492216742s: waiting for machine to come up
	I1128 00:43:16.981905   45815 start.go:369] acquired machines lock for "no-preload-473615" in 3m38.128436656s
	I1128 00:43:16.981988   45815 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:43:16.982000   45815 fix.go:54] fixHost starting: 
	I1128 00:43:16.982400   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:43:16.982434   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:43:17.001935   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39505
	I1128 00:43:17.002390   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:43:17.002899   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:43:17.002930   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:43:17.003303   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:43:17.003515   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:17.003658   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:43:17.005243   45815 fix.go:102] recreateIfNeeded on no-preload-473615: state=Stopped err=<nil>
	I1128 00:43:17.005273   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	W1128 00:43:17.005442   45815 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:43:17.007831   45815 out.go:177] * Restarting existing kvm2 VM for "no-preload-473615" ...
	I1128 00:43:15.620590   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.621046   45580 main.go:141] libmachine: (embed-certs-304541) Found IP for machine: 192.168.50.93
	I1128 00:43:15.621071   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has current primary IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.621083   45580 main.go:141] libmachine: (embed-certs-304541) Reserving static IP address...
	I1128 00:43:15.621440   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "embed-certs-304541", mac: "52:54:00:0a:1d:4f", ip: "192.168.50.93"} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.621473   45580 main.go:141] libmachine: (embed-certs-304541) DBG | skip adding static IP to network mk-embed-certs-304541 - found existing host DHCP lease matching {name: "embed-certs-304541", mac: "52:54:00:0a:1d:4f", ip: "192.168.50.93"}
	I1128 00:43:15.621484   45580 main.go:141] libmachine: (embed-certs-304541) Reserved static IP address: 192.168.50.93
	I1128 00:43:15.621498   45580 main.go:141] libmachine: (embed-certs-304541) Waiting for SSH to be available...
	I1128 00:43:15.621516   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Getting to WaitForSSH function...
	I1128 00:43:15.623594   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.623865   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.623897   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.623968   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Using SSH client type: external
	I1128 00:43:15.623989   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa (-rw-------)
	I1128 00:43:15.624044   45580 main.go:141] libmachine: (embed-certs-304541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.93 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:43:15.624057   45580 main.go:141] libmachine: (embed-certs-304541) DBG | About to run SSH command:
	I1128 00:43:15.624068   45580 main.go:141] libmachine: (embed-certs-304541) DBG | exit 0
	I1128 00:43:15.708868   45580 main.go:141] libmachine: (embed-certs-304541) DBG | SSH cmd err, output: <nil>: 
	I1128 00:43:15.709246   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetConfigRaw
	I1128 00:43:15.709989   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetIP
	I1128 00:43:15.712312   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.712623   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.712660   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.712968   45580 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/config.json ...
	I1128 00:43:15.713166   45580 machine.go:88] provisioning docker machine ...
	I1128 00:43:15.713183   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:15.713360   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetMachineName
	I1128 00:43:15.713552   45580 buildroot.go:166] provisioning hostname "embed-certs-304541"
	I1128 00:43:15.713573   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetMachineName
	I1128 00:43:15.713731   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:15.716027   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.716386   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.716419   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.716530   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:15.716703   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:15.716856   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:15.717034   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:15.717229   45580 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:15.717565   45580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.93 22 <nil> <nil>}
	I1128 00:43:15.717579   45580 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-304541 && echo "embed-certs-304541" | sudo tee /etc/hostname
	I1128 00:43:15.841766   45580 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-304541
	
	I1128 00:43:15.841821   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:15.844529   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.844872   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.844919   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.845037   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:15.845231   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:15.845360   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:15.845476   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:15.845616   45580 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:15.845976   45580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.93 22 <nil> <nil>}
	I1128 00:43:15.846002   45580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-304541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-304541/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-304541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:43:15.965821   45580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:43:15.965855   45580 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:43:15.965876   45580 buildroot.go:174] setting up certificates
	I1128 00:43:15.965890   45580 provision.go:83] configureAuth start
	I1128 00:43:15.965903   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetMachineName
	I1128 00:43:15.966183   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetIP
	I1128 00:43:15.968916   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.969285   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.969313   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.969483   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:15.971549   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.971913   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.971949   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.972092   45580 provision.go:138] copyHostCerts
	I1128 00:43:15.972168   45580 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:43:15.972182   45580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:43:15.972260   45580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:43:15.972415   45580 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:43:15.972427   45580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:43:15.972472   45580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:43:15.972562   45580 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:43:15.972572   45580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:43:15.972603   45580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:43:15.972663   45580 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.embed-certs-304541 san=[192.168.50.93 192.168.50.93 localhost 127.0.0.1 minikube embed-certs-304541]
	I1128 00:43:16.272269   45580 provision.go:172] copyRemoteCerts
	I1128 00:43:16.272333   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:43:16.272354   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.274793   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.275102   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.275138   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.275285   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.275495   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.275628   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.275752   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:43:16.361853   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 00:43:16.386340   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:43:16.410490   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1128 00:43:16.433471   45580 provision.go:86] duration metric: configureAuth took 467.56808ms
	I1128 00:43:16.433505   45580 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:43:16.433686   45580 config.go:182] Loaded profile config "embed-certs-304541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:43:16.433760   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.436514   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.436987   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.437029   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.437129   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.437316   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.437472   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.437614   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.437748   45580 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:16.438055   45580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.93 22 <nil> <nil>}
	I1128 00:43:16.438072   45580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:43:16.732374   45580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:43:16.732407   45580 machine.go:91] provisioned docker machine in 1.019227514s
	I1128 00:43:16.732419   45580 start.go:300] post-start starting for "embed-certs-304541" (driver="kvm2")
	I1128 00:43:16.732429   45580 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:43:16.732474   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.732847   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:43:16.732879   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.735564   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.735987   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.736027   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.736210   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.736393   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.736549   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.736714   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:43:16.824741   45580 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:43:16.829313   45580 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:43:16.829347   45580 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:43:16.829426   45580 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:43:16.829529   45580 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:43:16.829642   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:43:16.839740   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:16.862881   45580 start.go:303] post-start completed in 130.432418ms
	I1128 00:43:16.862911   45580 fix.go:56] fixHost completed within 21.946020541s
	I1128 00:43:16.862938   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.865721   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.866113   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.866144   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.866336   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.866545   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.866744   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.866869   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.867046   45580 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:16.867350   45580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.93 22 <nil> <nil>}
	I1128 00:43:16.867359   45580 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:43:16.981759   45580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701132196.930241591
	
	I1128 00:43:16.981779   45580 fix.go:206] guest clock: 1701132196.930241591
	I1128 00:43:16.981786   45580 fix.go:219] Guest: 2023-11-28 00:43:16.930241591 +0000 UTC Remote: 2023-11-28 00:43:16.862915941 +0000 UTC m=+249.133993071 (delta=67.32565ms)
	I1128 00:43:16.981804   45580 fix.go:190] guest clock delta is within tolerance: 67.32565ms
	I1128 00:43:16.981809   45580 start.go:83] releasing machines lock for "embed-certs-304541", held for 22.064954687s
	I1128 00:43:16.981848   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.982121   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetIP
	I1128 00:43:16.984621   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.984927   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.984986   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.985171   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.985675   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.985825   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.985892   45580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:43:16.985926   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.986025   45580 ssh_runner.go:195] Run: cat /version.json
	I1128 00:43:16.986054   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.988651   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.988839   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.989079   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.989113   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.989367   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.989411   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.989451   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.989491   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.989544   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.989648   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.989692   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.989781   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.989860   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:43:16.989933   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:43:17.104567   45580 ssh_runner.go:195] Run: systemctl --version
	I1128 00:43:17.110844   45580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:43:17.254201   45580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:43:17.262078   45580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:43:17.262154   45580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:43:17.282179   45580 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:43:17.282209   45580 start.go:472] detecting cgroup driver to use...
	I1128 00:43:17.282271   45580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:43:17.296891   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:43:17.311479   45580 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:43:17.311552   45580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:43:17.325946   45580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:43:17.340513   45580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:43:17.469601   45580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:43:17.605127   45580 docker.go:219] disabling docker service ...
	I1128 00:43:17.605199   45580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:43:17.621850   45580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:43:17.634608   45580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:43:17.753009   45580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:43:17.859260   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:43:17.872564   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:43:17.889701   45580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 00:43:17.889755   45580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:17.898724   45580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:43:17.898799   45580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:17.907565   45580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:17.916243   45580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:17.925280   45580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:43:17.934933   45580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:43:17.943902   45580 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:43:17.943960   45580 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:43:17.957608   45580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:43:17.967379   45580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:43:18.074173   45580 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 00:43:18.251191   45580 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:43:18.251264   45580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:43:18.259963   45580 start.go:540] Will wait 60s for crictl version
	I1128 00:43:18.260041   45580 ssh_runner.go:195] Run: which crictl
	I1128 00:43:18.263936   45580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:43:18.303087   45580 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:43:18.303181   45580 ssh_runner.go:195] Run: crio --version
	I1128 00:43:18.344939   45580 ssh_runner.go:195] Run: crio --version
	I1128 00:43:18.402444   45580 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 00:43:17.009281   45815 main.go:141] libmachine: (no-preload-473615) Calling .Start
	I1128 00:43:17.009442   45815 main.go:141] libmachine: (no-preload-473615) Ensuring networks are active...
	I1128 00:43:17.010161   45815 main.go:141] libmachine: (no-preload-473615) Ensuring network default is active
	I1128 00:43:17.010485   45815 main.go:141] libmachine: (no-preload-473615) Ensuring network mk-no-preload-473615 is active
	I1128 00:43:17.010860   45815 main.go:141] libmachine: (no-preload-473615) Getting domain xml...
	I1128 00:43:17.011780   45815 main.go:141] libmachine: (no-preload-473615) Creating domain...
	I1128 00:43:18.289916   45815 main.go:141] libmachine: (no-preload-473615) Waiting to get IP...
	I1128 00:43:18.290892   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:18.291348   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:18.291434   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:18.291321   46604 retry.go:31] will retry after 208.579367ms: waiting for machine to come up
	I1128 00:43:18.501947   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:18.502401   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:18.502431   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:18.502362   46604 retry.go:31] will retry after 296.427399ms: waiting for machine to come up
	I1128 00:43:18.403974   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetIP
	I1128 00:43:18.406811   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:18.407171   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:18.407201   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:18.407459   45580 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1128 00:43:18.411727   45580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:18.423460   45580 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:43:18.423570   45580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:43:18.463722   45580 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 00:43:18.463797   45580 ssh_runner.go:195] Run: which lz4
	I1128 00:43:18.468008   45580 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1128 00:43:18.472523   45580 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 00:43:18.472560   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 00:43:20.378745   45580 crio.go:444] Took 1.910818 seconds to copy over tarball
	I1128 00:43:20.378836   45580 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 00:43:18.801131   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:18.801707   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:18.801741   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:18.801666   46604 retry.go:31] will retry after 355.365314ms: waiting for machine to come up
	I1128 00:43:19.159088   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:19.159590   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:19.159628   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:19.159550   46604 retry.go:31] will retry after 584.908889ms: waiting for machine to come up
	I1128 00:43:19.746379   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:19.746941   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:19.746978   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:19.746901   46604 retry.go:31] will retry after 707.432097ms: waiting for machine to come up
	I1128 00:43:20.455880   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:20.456378   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:20.456402   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:20.456346   46604 retry.go:31] will retry after 598.57984ms: waiting for machine to come up
	I1128 00:43:21.056102   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:21.056548   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:21.056579   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:21.056500   46604 retry.go:31] will retry after 742.55033ms: waiting for machine to come up
	I1128 00:43:21.800382   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:21.800895   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:21.800926   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:21.800841   46604 retry.go:31] will retry after 1.138217867s: waiting for machine to come up
	I1128 00:43:22.941401   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:22.941902   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:22.941932   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:22.941867   46604 retry.go:31] will retry after 1.552423219s: waiting for machine to come up
	I1128 00:43:23.310969   45580 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.932089296s)
	I1128 00:43:23.311004   45580 crio.go:451] Took 2.932228 seconds to extract the tarball
	I1128 00:43:23.311017   45580 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 00:43:23.351844   45580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:43:23.397599   45580 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 00:43:23.397625   45580 cache_images.go:84] Images are preloaded, skipping loading
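Whether the preload is usable is decided purely by the crictl image listing: before the tarball is unpacked the expected registry.k8s.io/kube-apiserver:v1.28.4 image is missing, and after extraction the same listing reports everything present. A hand-run equivalent (sketch only) would be:

    sudo crictl images --output json      # the same command the test runs
    sudo crictl images | grep kube-apiserver
    # once the preload is extracted this should list registry.k8s.io/kube-apiserver at v1.28.4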
	I1128 00:43:23.397705   45580 ssh_runner.go:195] Run: crio config
	I1128 00:43:23.460298   45580 cni.go:84] Creating CNI manager for ""
	I1128 00:43:23.460326   45580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:43:23.460348   45580 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:43:23.460383   45580 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.93 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-304541 NodeName:embed-certs-304541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.93"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.93 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:43:23.460547   45580 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.93
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-304541"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.93
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.93"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:43:23.460641   45580 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-304541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.93
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-304541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 00:43:23.460696   45580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 00:43:23.470334   45580 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:43:23.470410   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:43:23.480675   45580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1128 00:43:23.497482   45580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:43:23.513709   45580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
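At this point the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit, and the freshly generated kubeadm config have all been written to the node, with the config staged as /var/tmp/minikube/kubeadm.yaml.new. On this restart path the staged copy is later diffed against the kubeadm.yaml already on disk before deciding whether a full reconfigure is needed; the same checks by hand would look roughly like:

    systemctl cat kubelet                                              # shows the unit plus the 10-kubeadm.conf drop-in
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
    # an empty diff means the existing cluster config still matches what minikube would generate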
	I1128 00:43:23.530363   45580 ssh_runner.go:195] Run: grep 192.168.50.93	control-plane.minikube.internal$ /etc/hosts
	I1128 00:43:23.533938   45580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.93	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:23.546399   45580 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541 for IP: 192.168.50.93
	I1128 00:43:23.546443   45580 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:43:23.546632   45580 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:43:23.546695   45580 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:43:23.546799   45580 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/client.key
	I1128 00:43:23.546892   45580 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/apiserver.key.9bda4d83
	I1128 00:43:23.546960   45580 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/proxy-client.key
	I1128 00:43:23.547122   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:43:23.547178   45580 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:43:23.547196   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:43:23.547237   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:43:23.547280   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:43:23.547317   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:43:23.547392   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:23.548287   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:43:23.571910   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1128 00:43:23.597339   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:43:23.621977   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 00:43:23.648048   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:43:23.671213   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:43:23.695307   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:43:23.719122   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:43:23.743153   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:43:23.766469   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:43:23.789932   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:43:23.813950   45580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:43:23.830291   45580 ssh_runner.go:195] Run: openssl version
	I1128 00:43:23.837945   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:43:23.847572   45580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:23.852284   45580 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:23.852334   45580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:23.860003   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:43:23.872829   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:43:23.886286   45580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:43:23.892997   45580 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:43:23.893079   45580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:43:23.899923   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:43:23.909771   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:43:23.919498   45580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:43:23.924066   45580 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:43:23.924126   45580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:43:23.929583   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
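The hash-named symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the standard OpenSSL c_rehash convention: the link name is the certificate's subject hash plus a ".0" suffix, which is how TLS clients locate a CA under /etc/ssl/certs. The hash comes from the same command the log runs:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941 for this CA, matching the /etc/ssl/certs/b5213941.0 symlink above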
	I1128 00:43:23.939366   45580 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:43:23.944091   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:43:23.950255   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:43:23.956493   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:43:23.962278   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:43:23.970032   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:43:23.977660   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
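Each of the -checkend 86400 calls above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; the command exits 0 if it will and non-zero if it would expire, so a clean pass here means none of the control-plane certificates need regeneration. Checked by hand:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"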
	I1128 00:43:23.984257   45580 kubeadm.go:404] StartCluster: {Name:embed-certs-304541 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-304541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.93 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:43:23.984408   45580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:43:23.984471   45580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:43:24.026147   45580 cri.go:89] found id: ""
	I1128 00:43:24.026222   45580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:43:24.035520   45580 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:43:24.035550   45580 kubeadm.go:636] restartCluster start
	I1128 00:43:24.035631   45580 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:43:24.044318   45580 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:24.045591   45580 kubeconfig.go:92] found "embed-certs-304541" server: "https://192.168.50.93:8443"
	I1128 00:43:24.047987   45580 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:43:24.056482   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:24.056541   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:24.067055   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:24.067072   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:24.067108   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:24.076950   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:24.577344   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:24.577441   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:24.588707   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:25.077862   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:25.077965   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:25.089729   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:25.577938   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:25.578019   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:25.593191   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:26.077819   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:26.077891   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:26.091224   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:26.577757   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:26.577844   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:26.588769   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:27.077106   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:27.077235   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:27.088668   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:27.577169   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:27.577249   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:27.588221   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:24.496599   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:24.496989   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:24.497018   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:24.496943   46604 retry.go:31] will retry after 2.05343917s: waiting for machine to come up
	I1128 00:43:26.552249   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:26.552684   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:26.552716   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:26.552636   46604 retry.go:31] will retry after 2.338063311s: waiting for machine to come up
	I1128 00:43:28.077161   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:28.077265   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:28.088552   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:28.577077   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:28.577168   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:28.588335   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:29.077927   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:29.078027   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:29.089679   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:29.577193   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:29.577293   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:29.588230   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:30.077430   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:30.077542   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:30.088547   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:30.577088   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:30.577203   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:30.588230   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:31.077809   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:31.077907   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:31.090329   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:31.577897   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:31.577975   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:31.591561   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:32.077101   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:32.077206   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:32.087945   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:32.577446   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:32.577528   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:32.588542   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:28.893450   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:28.893812   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:28.893841   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:28.893761   46604 retry.go:31] will retry after 3.578756905s: waiting for machine to come up
	I1128 00:43:32.473719   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:32.474199   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:32.474234   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:32.474155   46604 retry.go:31] will retry after 3.070637163s: waiting for machine to come up
	I1128 00:43:36.805769   46126 start.go:369] acquired machines lock for "default-k8s-diff-port-488423" in 2m54.466321295s
	I1128 00:43:36.805830   46126 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:43:36.805840   46126 fix.go:54] fixHost starting: 
	I1128 00:43:36.806271   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:43:36.806311   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:43:36.825195   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32859
	I1128 00:43:36.825723   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:43:36.826325   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:43:36.826348   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:43:36.826703   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:43:36.826932   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:36.827106   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:43:36.828683   46126 fix.go:102] recreateIfNeeded on default-k8s-diff-port-488423: state=Stopped err=<nil>
	I1128 00:43:36.828709   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	W1128 00:43:36.828895   46126 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:43:36.830377   46126 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-488423" ...
	I1128 00:43:36.831614   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Start
	I1128 00:43:36.831781   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Ensuring networks are active...
	I1128 00:43:36.832447   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Ensuring network default is active
	I1128 00:43:36.832841   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Ensuring network mk-default-k8s-diff-port-488423 is active
	I1128 00:43:36.833220   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Getting domain xml...
	I1128 00:43:36.833947   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Creating domain...
	I1128 00:43:33.077031   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:33.077109   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:33.088430   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:33.578007   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:33.578093   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:33.589185   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:34.056684   45580 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:43:34.056718   45580 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:43:34.056733   45580 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:43:34.056836   45580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:43:34.096078   45580 cri.go:89] found id: ""
	I1128 00:43:34.096157   45580 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:43:34.111200   45580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:43:34.119603   45580 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:43:34.119654   45580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:43:34.128150   45580 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:43:34.128170   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:34.236389   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:34.879134   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:35.070594   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:35.159436   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:35.223694   45580 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:43:35.223787   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:35.238511   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:35.753955   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:36.254449   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:36.753943   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:37.253987   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:37.753515   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:37.777619   45580 api_server.go:72] duration metric: took 2.553922938s to wait for apiserver process to appear ...
	I1128 00:43:37.777646   45580 api_server.go:88] waiting for apiserver healthz status ...
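From here the restart logic polls roughly every half second for a kube-apiserver process and then waits for a healthy /healthz response on the advertised address. The same probes can be run by hand on the guest (a sketch, using the API endpoint reported earlier for this profile):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # the process check the log repeats above
    curl -sk https://192.168.50.93:8443/healthz      # should print "ok" once the apiserver is healthy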
	I1128 00:43:35.548294   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.548718   45815 main.go:141] libmachine: (no-preload-473615) Found IP for machine: 192.168.61.195
	I1128 00:43:35.548746   45815 main.go:141] libmachine: (no-preload-473615) Reserving static IP address...
	I1128 00:43:35.548790   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has current primary IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.549194   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "no-preload-473615", mac: "52:54:00:bb:93:0d", ip: "192.168.61.195"} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.549223   45815 main.go:141] libmachine: (no-preload-473615) DBG | skip adding static IP to network mk-no-preload-473615 - found existing host DHCP lease matching {name: "no-preload-473615", mac: "52:54:00:bb:93:0d", ip: "192.168.61.195"}
	I1128 00:43:35.549238   45815 main.go:141] libmachine: (no-preload-473615) Reserved static IP address: 192.168.61.195
	I1128 00:43:35.549253   45815 main.go:141] libmachine: (no-preload-473615) Waiting for SSH to be available...
	I1128 00:43:35.549265   45815 main.go:141] libmachine: (no-preload-473615) DBG | Getting to WaitForSSH function...
	I1128 00:43:35.551246   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.551573   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.551601   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.551757   45815 main.go:141] libmachine: (no-preload-473615) DBG | Using SSH client type: external
	I1128 00:43:35.551778   45815 main.go:141] libmachine: (no-preload-473615) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa (-rw-------)
	I1128 00:43:35.551811   45815 main.go:141] libmachine: (no-preload-473615) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:43:35.551831   45815 main.go:141] libmachine: (no-preload-473615) DBG | About to run SSH command:
	I1128 00:43:35.551867   45815 main.go:141] libmachine: (no-preload-473615) DBG | exit 0
	I1128 00:43:35.636291   45815 main.go:141] libmachine: (no-preload-473615) DBG | SSH cmd err, output: <nil>: 
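The external SSH probe above runs with host-key checking disabled and a dedicated per-machine key, which is why a freshly restarted VM can be reached without touching known_hosts. Reproducing it by hand (a sketch, using the same options shown in the DBG line above):

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa \
        docker@192.168.61.195 'exit 0'
    # exit status 0 is what the "SSH cmd err, output: <nil>" line above reports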
	I1128 00:43:35.636667   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetConfigRaw
	I1128 00:43:35.637278   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetIP
	I1128 00:43:35.639799   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.640164   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.640209   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.640423   45815 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/config.json ...
	I1128 00:43:35.640598   45815 machine.go:88] provisioning docker machine ...
	I1128 00:43:35.640632   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:35.640853   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetMachineName
	I1128 00:43:35.641071   45815 buildroot.go:166] provisioning hostname "no-preload-473615"
	I1128 00:43:35.641090   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetMachineName
	I1128 00:43:35.641242   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:35.643554   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.643845   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.643905   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.643977   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:35.644140   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:35.644279   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:35.644370   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:35.644540   45815 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:35.644971   45815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I1128 00:43:35.644986   45815 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-473615 && echo "no-preload-473615" | sudo tee /etc/hostname
	I1128 00:43:35.766635   45815 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-473615
	
	I1128 00:43:35.766689   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:35.769704   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.770068   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.770108   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.770279   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:35.770463   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:35.770622   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:35.770733   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:35.770849   45815 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:35.771214   45815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I1128 00:43:35.771235   45815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-473615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-473615/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-473615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:43:35.889378   45815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:43:35.889416   45815 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:43:35.889480   45815 buildroot.go:174] setting up certificates
	I1128 00:43:35.889494   45815 provision.go:83] configureAuth start
	I1128 00:43:35.889506   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetMachineName
	I1128 00:43:35.889810   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetIP
	I1128 00:43:35.892924   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.893313   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.893359   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.893477   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:35.895759   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.896140   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.896169   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.896281   45815 provision.go:138] copyHostCerts
	I1128 00:43:35.896345   45815 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:43:35.896370   45815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:43:35.896448   45815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:43:35.896565   45815 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:43:35.896577   45815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:43:35.896618   45815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:43:35.896713   45815 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:43:35.896728   45815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:43:35.896778   45815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:43:35.896856   45815 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.no-preload-473615 san=[192.168.61.195 192.168.61.195 localhost 127.0.0.1 minikube no-preload-473615]
	I1128 00:43:36.080367   45815 provision.go:172] copyRemoteCerts
	I1128 00:43:36.080429   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:43:36.080451   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.082989   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.083327   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.083358   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.083529   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.083745   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.083927   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.084077   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:43:36.166338   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:43:36.191867   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1128 00:43:36.214184   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 00:43:36.237102   45815 provision.go:86] duration metric: configureAuth took 347.594627ms
	I1128 00:43:36.237135   45815 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:43:36.237338   45815 config.go:182] Loaded profile config "no-preload-473615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 00:43:36.237421   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.240408   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.240787   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.240826   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.240995   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.241193   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.241368   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.241539   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.241712   45815 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:36.242000   45815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I1128 00:43:36.242016   45815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:43:36.565582   45815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:43:36.565609   45815 machine.go:91] provisioned docker machine in 924.985826ms
	I1128 00:43:36.565623   45815 start.go:300] post-start starting for "no-preload-473615" (driver="kvm2")
	I1128 00:43:36.565649   45815 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:43:36.565677   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.565994   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:43:36.566025   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.568653   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.569032   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.569064   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.569148   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.569337   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.569502   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.569678   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:43:36.655695   45815 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:43:36.659909   45815 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:43:36.659941   45815 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:43:36.660020   45815 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:43:36.660108   45815 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:43:36.660228   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:43:36.669575   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:36.690970   45815 start.go:303] post-start completed in 125.33198ms
	I1128 00:43:36.690998   45815 fix.go:56] fixHost completed within 19.708998537s
	I1128 00:43:36.691022   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.693929   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.694361   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.694400   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.694665   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.694877   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.695064   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.695237   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.695404   45815 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:36.695738   45815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I1128 00:43:36.695750   45815 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:43:36.805602   45815 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701132216.779589412
	
	I1128 00:43:36.805626   45815 fix.go:206] guest clock: 1701132216.779589412
	I1128 00:43:36.805637   45815 fix.go:219] Guest: 2023-11-28 00:43:36.779589412 +0000 UTC Remote: 2023-11-28 00:43:36.691003095 +0000 UTC m=+237.986754258 (delta=88.586317ms)
	I1128 00:43:36.805673   45815 fix.go:190] guest clock delta is within tolerance: 88.586317ms
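The guest-clock check above amounts to parsing the guest's `date +%s.%N` output and comparing it against the host clock. A minimal sketch of that comparison follows; parseGuestClock and checkClockDelta are hypothetical helper names, the tolerance is passed in rather than taken from minikube, and the fractional part is assumed to be the full 9-digit nanosecond field as in the log:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts `date +%s.%N` output such as
    // "1701132216.779589412" into a time.Time (assumes a 9-digit fraction).
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    // checkClockDelta reports whether guest and host clocks agree within tol.
    func checkClockDelta(guest, host time.Time, tol time.Duration) bool {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta <= tol
    }

    func main() {
    	guest, _ := parseGuestClock("1701132216.779589412")
    	host := time.Date(2023, 11, 28, 0, 43, 36, 691003095, time.UTC)
    	// Delta is ~88.586317ms, matching the "within tolerance" line above.
    	fmt.Println(checkClockDelta(guest, host, time.Second))
    }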
	I1128 00:43:36.805678   45815 start.go:83] releasing machines lock for "no-preload-473615", held for 19.823720426s
	I1128 00:43:36.805705   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.805989   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetIP
	I1128 00:43:36.808864   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.809316   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.809346   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.809529   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.810162   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.810361   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.810441   45815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:43:36.810494   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.810824   45815 ssh_runner.go:195] Run: cat /version.json
	I1128 00:43:36.810845   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.813747   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.813979   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.814064   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.814102   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.814263   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.814444   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.814471   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.814508   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.814659   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.814764   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.814844   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.814913   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:43:36.815484   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.815640   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:43:36.923054   45815 ssh_runner.go:195] Run: systemctl --version
	I1128 00:43:36.930078   45815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:43:37.082251   45815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:43:37.088817   45815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:43:37.088890   45815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:43:37.110921   45815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:43:37.110950   45815 start.go:472] detecting cgroup driver to use...
	I1128 00:43:37.111017   45815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:43:37.128450   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:43:37.144814   45815 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:43:37.144875   45815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:43:37.158185   45815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:43:37.170311   45815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:43:37.287910   45815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:43:37.414142   45815 docker.go:219] disabling docker service ...
	I1128 00:43:37.414222   45815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:43:37.427085   45815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:43:37.438631   45815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:43:37.559028   45815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:43:37.676646   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:43:37.689214   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:43:37.709298   45815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 00:43:37.709370   45815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:37.718368   45815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:43:37.718446   45815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:37.727188   45815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:37.736230   45815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:37.745594   45815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:43:37.755149   45815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:43:37.763179   45815 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:43:37.763237   45815 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:43:37.780091   45815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:43:37.790861   45815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:43:37.923396   45815 ssh_runner.go:195] Run: sudo systemctl restart crio
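The CRI-O configuration steps above are plain sed edits of /etc/crio/crio.conf.d/02-crio.conf followed by a daemon-reload and restart. A small sketch of how those command strings can be composed is below; crioConfigCommands is a hypothetical helper (the real logic lives in crio.go and runs each command over the SSH runner shown in the log):

    package main

    import "fmt"

    // crioConfigCommands mirrors the edits from the log: set the pause image,
    // force the cgroupfs cgroup manager, and pin conmon to the "pod" cgroup.
    func crioConfigCommands(pauseImage, cgroupDriver string) []string {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	return []string{
    		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
    		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
    		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
    		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
    		"sudo systemctl daemon-reload",
    		"sudo systemctl restart crio",
    	}
    }

    func main() {
    	for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
    		fmt.Println(cmd) // in the log these run remotely via ssh_runner
    	}
    }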
	I1128 00:43:38.133933   45815 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:43:38.134013   45815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:43:38.143538   45815 start.go:540] Will wait 60s for crictl version
	I1128 00:43:38.143598   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.149212   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:43:38.205988   45815 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:43:38.206079   45815 ssh_runner.go:195] Run: crio --version
	I1128 00:43:38.261211   45815 ssh_runner.go:195] Run: crio --version
	I1128 00:43:38.315398   45815 out.go:177] * Preparing Kubernetes v1.29.0-rc.0 on CRI-O 1.24.1 ...
	I1128 00:43:38.317052   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetIP
	I1128 00:43:38.320262   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:38.320708   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:38.320736   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:38.320976   45815 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1128 00:43:38.325437   45815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:38.337411   45815 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1128 00:43:38.337457   45815 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:43:38.384218   45815 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.0". assuming images are not preloaded.
	I1128 00:43:38.384245   45815 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.0 registry.k8s.io/kube-controller-manager:v1.29.0-rc.0 registry.k8s.io/kube-scheduler:v1.29.0-rc.0 registry.k8s.io/kube-proxy:v1.29.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1128 00:43:38.384325   45815 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:38.384533   45815 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.384553   45815 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1128 00:43:38.384634   45815 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.384726   45815 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.384817   45815 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.384870   45815 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.384931   45815 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.386318   45815 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:38.386368   45815 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1128 00:43:38.386381   45815 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.386373   45815 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.386324   45815 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.386316   45815 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.386319   45815 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.386326   45815 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.526945   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.527246   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1128 00:43:38.538042   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.538097   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.539522   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.549538   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.557097   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.621381   45815 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.0" does not exist at hash "4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9" in container runtime
	I1128 00:43:38.621440   45815 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.621516   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.208059   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting to get IP...
	I1128 00:43:38.209168   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.209599   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.209688   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:38.209572   46749 retry.go:31] will retry after 256.562292ms: waiting for machine to come up
	I1128 00:43:38.468199   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.468798   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.468828   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:38.468722   46749 retry.go:31] will retry after 287.91937ms: waiting for machine to come up
	I1128 00:43:38.758157   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.758610   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.758640   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:38.758555   46749 retry.go:31] will retry after 377.696379ms: waiting for machine to come up
	I1128 00:43:39.138269   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:39.138761   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:39.138795   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:39.138706   46749 retry.go:31] will retry after 476.093256ms: waiting for machine to come up
	I1128 00:43:39.616256   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:39.616611   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:39.616638   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:39.616577   46749 retry.go:31] will retry after 628.654941ms: waiting for machine to come up
	I1128 00:43:40.246993   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:40.247498   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:40.247543   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:40.247455   46749 retry.go:31] will retry after 607.981973ms: waiting for machine to come up
	I1128 00:43:40.857220   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:40.857634   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:40.857663   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:40.857592   46749 retry.go:31] will retry after 866.108704ms: waiting for machine to come up
	I1128 00:43:41.725140   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:41.725695   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:41.725716   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:41.725609   46749 retry.go:31] will retry after 1.158669064s: waiting for machine to come up
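The repeated "will retry after ..." lines above are a bounded retry loop around the libvirt DHCP-lease lookup, with a randomized, growing backoff between attempts. A minimal sketch of that pattern, assuming a hypothetical lookupIP callback in place of the real libmachine call and illustrative backoff constants:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoIP = errors.New("unable to find current IP address")

    // waitForIP polls lookupIP until it returns an address or the deadline
    // passes, sleeping a randomized, growing interval between attempts.
    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	backoff := 200 * time.Millisecond
    	for attempt := 1; time.Now().Before(deadline); attempt++ {
    		ip, err := lookupIP()
    		if err == nil {
    			return ip, nil
    		}
    		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
    		fmt.Printf("attempt %d: %v, will retry after %s\n", attempt, err, sleep)
    		time.Sleep(sleep)
    		backoff = backoff * 3 / 2 // grow the base interval
    	}
    	return "", fmt.Errorf("timed out waiting for machine to come up")
    }

    func main() {
    	calls := 0
    	ip, err := waitForIP(func() (string, error) {
    		calls++
    		if calls < 4 {
    			return "", errNoIP
    		}
    		return "192.168.72.242", nil
    	}, time.Minute)
    	fmt.Println(ip, err)
    }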
	I1128 00:43:37.777663   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:42.028441   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:43:42.028478   45580 api_server.go:103] status: https://192.168.50.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:43:42.028492   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:42.043818   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:43:42.043846   45580 api_server.go:103] status: https://192.168.50.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:43:42.544532   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:42.551469   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:43:42.551505   45580 api_server.go:103] status: https://192.168.50.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:43:43.044055   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:43.050233   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:43:43.050262   45580 api_server.go:103] status: https://192.168.50.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:43:43.544857   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:43.550155   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 200:
	ok
	I1128 00:43:43.558929   45580 api_server.go:141] control plane version: v1.28.4
	I1128 00:43:43.558962   45580 api_server.go:131] duration metric: took 5.781308354s to wait for apiserver health ...
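The healthz wait above is a straightforward HTTPS poll: the anonymous probe first gets 403, then 500 while post-start hooks such as rbac/bootstrap-roles finish, and finally "200: ok". A minimal sketch of such a poller follows; waitForHealthz is a hypothetical name, and TLS verification is skipped here purely for brevity (minikube's real check in api_server.go authenticates with the cluster's certificates):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // 200 or the timeout expires, printing each non-OK body as the log does.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Assumption for this sketch only: skip certificate verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // "returned 200: ok"
    			}
    			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy within %s", timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.50.93:8443/healthz", 5*time.Minute))
    }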
	I1128 00:43:43.558974   45580 cni.go:84] Creating CNI manager for ""
	I1128 00:43:43.558984   45580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:43:43.560872   45580 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:43:38.775724   45815 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1128 00:43:38.775776   45815 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.775827   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.775953   45815 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1128 00:43:38.776035   45815 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.0" does not exist at hash "e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7" in container runtime
	I1128 00:43:38.776059   45815 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.776106   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.776188   45815 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.0" does not exist at hash "e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4" in container runtime
	I1128 00:43:38.776220   45815 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.776247   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.776315   45815 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.0" does not exist at hash "df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55" in container runtime
	I1128 00:43:38.776335   45815 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.776360   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.776456   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.776562   45815 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.776601   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.792457   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.792533   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.792584   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.792634   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.792714   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.929517   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.0
	I1128 00:43:38.929640   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0
	I1128 00:43:38.941438   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.0
	I1128 00:43:38.941544   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0
	I1128 00:43:38.941623   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.0
	I1128 00:43:38.941704   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0
	I1128 00:43:38.964773   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1128 00:43:38.964890   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1128 00:43:38.964980   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.0
	I1128 00:43:38.965038   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0
	I1128 00:43:38.965118   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1128 00:43:38.965175   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1128 00:43:38.965250   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0 (exists)
	I1128 00:43:38.965262   45815 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0
	I1128 00:43:38.965291   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0
	I1128 00:43:38.970386   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1128 00:43:38.970443   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0 (exists)
	I1128 00:43:38.970458   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0 (exists)
	I1128 00:43:38.974722   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1128 00:43:38.974970   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0 (exists)
	I1128 00:43:39.286976   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:41.143462   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0: (2.178138495s)
	I1128 00:43:41.143491   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.0 from cache
	I1128 00:43:41.143520   45815 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1128 00:43:41.143536   45815 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.856517641s)
	I1128 00:43:41.143563   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1128 00:43:41.143596   45815 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1128 00:43:41.143630   45815 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:41.143678   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:43.335836   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.192246706s)
	I1128 00:43:43.335894   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1128 00:43:43.335859   45815 ssh_runner.go:235] Completed: which crictl: (2.192168329s)
	I1128 00:43:43.335938   45815 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0
	I1128 00:43:43.335970   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0
	I1128 00:43:43.335971   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:42.886014   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:42.886540   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:42.886564   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:42.886457   46749 retry.go:31] will retry after 1.698662705s: waiting for machine to come up
	I1128 00:43:44.586452   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:44.586892   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:44.586917   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:44.586848   46749 retry.go:31] will retry after 1.681392058s: waiting for machine to come up
	I1128 00:43:46.270022   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:46.270545   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:46.270578   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:46.270491   46749 retry.go:31] will retry after 2.061464637s: waiting for machine to come up
	I1128 00:43:43.562274   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:43:43.583729   45580 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:43:43.614704   45580 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:43:43.627543   45580 system_pods.go:59] 8 kube-system pods found
	I1128 00:43:43.627587   45580 system_pods.go:61] "coredns-5dd5756b68-crmfq" [e412b41a-a4a4-4c8c-8fe9-b96c52e5815c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:43:43.627602   45580 system_pods.go:61] "etcd-embed-certs-304541" [ceeea55a-ffbb-4c18-b563-3552f8d47f3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 00:43:43.627622   45580 system_pods.go:61] "kube-apiserver-embed-certs-304541" [e7bd6f60-fe90-4413-b906-0101ad3bda9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 00:43:43.627632   45580 system_pods.go:61] "kube-controller-manager-embed-certs-304541" [e083fd78-3aad-44ed-8bac-fc72eeded7f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 00:43:43.627652   45580 system_pods.go:61] "kube-proxy-6d4rt" [bc801fd6-e726-41d3-afcf-5ed86723dca9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 00:43:43.627665   45580 system_pods.go:61] "kube-scheduler-embed-certs-304541" [df10b58f-43ec-4492-8d95-0d91ee88fec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 00:43:43.627676   45580 system_pods.go:61] "metrics-server-57f55c9bc5-sx4m7" [1618a041-6077-4076-8178-f2692dc983b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:43:43.627686   45580 system_pods.go:61] "storage-provisioner" [acaed13d-b10c-4fb6-b2b7-452cf928e1e5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 00:43:43.627696   45580 system_pods.go:74] duration metric: took 12.96707ms to wait for pod list to return data ...
	I1128 00:43:43.627709   45580 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:43:43.632593   45580 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:43:43.632628   45580 node_conditions.go:123] node cpu capacity is 2
	I1128 00:43:43.632642   45580 node_conditions.go:105] duration metric: took 4.924217ms to run NodePressure ...
	I1128 00:43:43.632667   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:43.945692   45580 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:43:43.950639   45580 kubeadm.go:787] kubelet initialised
	I1128 00:43:43.950666   45580 kubeadm.go:788] duration metric: took 4.940609ms waiting for restarted kubelet to initialise ...
	I1128 00:43:43.950677   45580 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:43:43.956229   45580 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-crmfq" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:45.975328   45580 pod_ready.go:102] pod "coredns-5dd5756b68-crmfq" in "kube-system" namespace has status "Ready":"False"
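The pod_ready.go waits above repeatedly fetch each control-plane pod and test its Ready condition until it flips to True or the 4m0s budget runs out. A minimal client-go sketch of that check; podReady and waitForPodReady are hypothetical helpers, and the pod name and timeout are taken from the log:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    // waitForPodReady polls a kube-system pod until it is Ready or times out.
    func waitForPodReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), name, metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %q not Ready within %s", name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Println(waitForPodReady(cs, "coredns-5dd5756b68-crmfq", 4*time.Minute))
    }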
	I1128 00:43:46.036655   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0: (2.700640635s)
	I1128 00:43:46.036696   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.0 from cache
	I1128 00:43:46.036721   45815 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0
	I1128 00:43:46.036786   45815 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.700708537s)
	I1128 00:43:46.036846   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1128 00:43:46.036792   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0
	I1128 00:43:46.036943   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1128 00:43:48.418287   45815 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.381312759s)
	I1128 00:43:48.418326   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0: (2.381419374s)
	I1128 00:43:48.418339   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1128 00:43:48.418346   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.0 from cache
	I1128 00:43:48.418370   45815 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1128 00:43:48.418426   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1128 00:43:48.333973   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:48.334480   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:48.334509   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:48.334432   46749 retry.go:31] will retry after 3.421790433s: waiting for machine to come up
	I1128 00:43:51.757991   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:51.758478   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:51.758505   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:51.758448   46749 retry.go:31] will retry after 3.726327818s: waiting for machine to come up
	I1128 00:43:48.484870   45580 pod_ready.go:92] pod "coredns-5dd5756b68-crmfq" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:48.484903   45580 pod_ready.go:81] duration metric: took 4.52864781s waiting for pod "coredns-5dd5756b68-crmfq" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:48.484916   45580 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:49.006488   45580 pod_ready.go:92] pod "etcd-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:49.006516   45580 pod_ready.go:81] duration metric: took 521.591023ms waiting for pod "etcd-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:49.006528   45580 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:49.014231   45580 pod_ready.go:92] pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:49.014258   45580 pod_ready.go:81] duration metric: took 7.721879ms waiting for pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:49.014270   45580 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:51.284611   45580 pod_ready.go:102] pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace has status "Ready":"False"
	I1128 00:43:52.636848   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.218389263s)
	I1128 00:43:52.636883   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1128 00:43:52.636912   45815 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0
	I1128 00:43:52.636964   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0
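The image-cache flow above is: inspect the image in the runtime with podman, remove any stale tag with crictl rmi, skip the tarball copy when it already exists under /var/lib/minikube/images, then load it with podman load. A minimal local sketch of the load step; loadCachedImage is a hypothetical helper, and it execs podman directly rather than through the SSH runner used in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // loadCachedImage loads a cached image tarball into the CRI-O image store
    // via podman, mirroring the "sudo podman load -i ..." commands in the log.
    func loadCachedImage(tarball string) error {
    	cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
    	}
    	return nil
    }

    func main() {
    	if err := loadCachedImage("/var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0"); err != nil {
    		fmt.Println(err)
    	}
    }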
	I1128 00:43:56.745904   45269 start.go:369] acquired machines lock for "old-k8s-version-732472" in 56.827856444s
	I1128 00:43:56.745949   45269 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:43:56.745959   45269 fix.go:54] fixHost starting: 
	I1128 00:43:56.746379   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:43:56.746447   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:43:56.764386   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35269
	I1128 00:43:56.764907   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:43:56.765554   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:43:56.765584   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:43:56.766037   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:43:56.766221   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:43:56.766365   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:43:56.768054   45269 fix.go:102] recreateIfNeeded on old-k8s-version-732472: state=Stopped err=<nil>
	I1128 00:43:56.768082   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	W1128 00:43:56.768219   45269 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:43:56.771618   45269 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-732472" ...
	I1128 00:43:55.486531   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.487099   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Found IP for machine: 192.168.72.242
	I1128 00:43:55.487128   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Reserving static IP address...
	I1128 00:43:55.487158   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has current primary IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.487539   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-488423", mac: "52:54:00:4c:3b:25", ip: "192.168.72.242"} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.487574   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | skip adding static IP to network mk-default-k8s-diff-port-488423 - found existing host DHCP lease matching {name: "default-k8s-diff-port-488423", mac: "52:54:00:4c:3b:25", ip: "192.168.72.242"}
	I1128 00:43:55.487595   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Reserved static IP address: 192.168.72.242
	I1128 00:43:55.487609   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for SSH to be available...
	I1128 00:43:55.487622   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Getting to WaitForSSH function...
	I1128 00:43:55.489858   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.490219   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.490253   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.490324   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Using SSH client type: external
	I1128 00:43:55.490373   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa (-rw-------)
	I1128 00:43:55.490414   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.242 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:43:55.490431   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | About to run SSH command:
	I1128 00:43:55.490447   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | exit 0
	I1128 00:43:55.584551   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | SSH cmd err, output: <nil>: 
	I1128 00:43:55.584987   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetConfigRaw
	I1128 00:43:55.585628   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetIP
	I1128 00:43:55.588444   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.588889   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.588924   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.589207   46126 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/config.json ...
	I1128 00:43:55.589475   46126 machine.go:88] provisioning docker machine ...
	I1128 00:43:55.589501   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:55.589744   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetMachineName
	I1128 00:43:55.590007   46126 buildroot.go:166] provisioning hostname "default-k8s-diff-port-488423"
	I1128 00:43:55.590031   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetMachineName
	I1128 00:43:55.590203   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:55.592733   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.593136   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.593170   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.593313   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:55.593480   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.593628   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.593756   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:55.593918   46126 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:55.594316   46126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.242 22 <nil> <nil>}
	I1128 00:43:55.594333   46126 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-488423 && echo "default-k8s-diff-port-488423" | sudo tee /etc/hostname
	I1128 00:43:55.739338   46126 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-488423
	
	I1128 00:43:55.739368   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:55.742483   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.742870   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.742906   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.743009   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:55.743215   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.743365   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.743512   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:55.743669   46126 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:55.744119   46126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.242 22 <nil> <nil>}
	I1128 00:43:55.744140   46126 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-488423' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-488423/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-488423' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:43:55.883117   46126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:43:55.883146   46126 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:43:55.883185   46126 buildroot.go:174] setting up certificates
	I1128 00:43:55.883198   46126 provision.go:83] configureAuth start
	I1128 00:43:55.883216   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetMachineName
	I1128 00:43:55.883566   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetIP
	I1128 00:43:55.886292   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.886625   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.886652   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.886796   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:55.888873   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.889213   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.889233   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.889347   46126 provision.go:138] copyHostCerts
	I1128 00:43:55.889401   46126 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:43:55.889413   46126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:43:55.889478   46126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:43:55.889611   46126 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:43:55.889623   46126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:43:55.889650   46126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:43:55.889729   46126 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:43:55.889738   46126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:43:55.889765   46126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:43:55.889848   46126 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-488423 san=[192.168.72.242 192.168.72.242 localhost 127.0.0.1 minikube default-k8s-diff-port-488423]
	I1128 00:43:55.945434   46126 provision.go:172] copyRemoteCerts
	I1128 00:43:55.945516   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:43:55.945547   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:55.948894   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.949387   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.949422   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.949800   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:55.950023   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.950215   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:55.950367   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:43:56.045647   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:43:56.069972   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1128 00:43:56.093947   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 00:43:56.118840   46126 provision.go:86] duration metric: configureAuth took 235.628083ms
	I1128 00:43:56.118867   46126 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:43:56.119072   46126 config.go:182] Loaded profile config "default-k8s-diff-port-488423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:43:56.119159   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.122135   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.122514   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.122550   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.122680   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.122884   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.123076   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.123253   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.123418   46126 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:56.123729   46126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.242 22 <nil> <nil>}
	I1128 00:43:56.123746   46126 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:43:56.476330   46126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:43:56.476360   46126 machine.go:91] provisioned docker machine in 886.868182ms
	I1128 00:43:56.476384   46126 start.go:300] post-start starting for "default-k8s-diff-port-488423" (driver="kvm2")
	I1128 00:43:56.476399   46126 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:43:56.476422   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.476787   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:43:56.476824   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.479803   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.480168   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.480208   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.480342   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.480542   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.480729   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.480901   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:43:56.574040   46126 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:43:56.578163   46126 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:43:56.578186   46126 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:43:56.578247   46126 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:43:56.578339   46126 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:43:56.578455   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:43:56.586455   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:56.613452   46126 start.go:303] post-start completed in 137.050871ms
	I1128 00:43:56.613484   46126 fix.go:56] fixHost completed within 19.807643021s
	I1128 00:43:56.613510   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.616834   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.617216   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.617253   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.617478   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.617686   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.617859   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.618105   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.618302   46126 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:56.618618   46126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.242 22 <nil> <nil>}
	I1128 00:43:56.618630   46126 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:43:56.745691   46126 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701132236.690190729
	
	I1128 00:43:56.745711   46126 fix.go:206] guest clock: 1701132236.690190729
	I1128 00:43:56.745731   46126 fix.go:219] Guest: 2023-11-28 00:43:56.690190729 +0000 UTC Remote: 2023-11-28 00:43:56.613489194 +0000 UTC m=+194.421672716 (delta=76.701535ms)
	I1128 00:43:56.745784   46126 fix.go:190] guest clock delta is within tolerance: 76.701535ms
	I1128 00:43:56.745798   46126 start.go:83] releasing machines lock for "default-k8s-diff-port-488423", held for 19.939986738s
	I1128 00:43:56.745837   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.746091   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetIP
	I1128 00:43:56.749097   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.749453   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.749486   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.749648   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.750192   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.750392   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.750446   46126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:43:56.750493   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.750661   46126 ssh_runner.go:195] Run: cat /version.json
	I1128 00:43:56.750685   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.753480   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.753655   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.753948   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.753976   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.754096   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.754163   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.754191   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.754241   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.754327   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.754474   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.754489   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.754621   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:43:56.754644   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.754779   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:43:56.850794   46126 ssh_runner.go:195] Run: systemctl --version
	I1128 00:43:56.872044   46126 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:43:57.016328   46126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:43:57.022389   46126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:43:57.022463   46126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:43:57.039925   46126 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:43:57.039959   46126 start.go:472] detecting cgroup driver to use...
	I1128 00:43:57.040030   46126 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:43:57.056385   46126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:43:57.068344   46126 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:43:57.068413   46126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:43:57.081752   46126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:43:57.095169   46126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:43:57.192392   46126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:43:56.772995   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Start
	I1128 00:43:56.773150   45269 main.go:141] libmachine: (old-k8s-version-732472) Ensuring networks are active...
	I1128 00:43:56.774032   45269 main.go:141] libmachine: (old-k8s-version-732472) Ensuring network default is active
	I1128 00:43:56.774327   45269 main.go:141] libmachine: (old-k8s-version-732472) Ensuring network mk-old-k8s-version-732472 is active
	I1128 00:43:56.774732   45269 main.go:141] libmachine: (old-k8s-version-732472) Getting domain xml...
	I1128 00:43:56.775433   45269 main.go:141] libmachine: (old-k8s-version-732472) Creating domain...
	I1128 00:43:53.781169   45580 pod_ready.go:92] pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:53.781193   45580 pod_ready.go:81] duration metric: took 4.766915226s waiting for pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.781203   45580 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6d4rt" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.789370   45580 pod_ready.go:92] pod "kube-proxy-6d4rt" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:53.789400   45580 pod_ready.go:81] duration metric: took 8.189391ms waiting for pod "kube-proxy-6d4rt" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.789412   45580 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.794277   45580 pod_ready.go:92] pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:53.794299   45580 pod_ready.go:81] duration metric: took 4.87905ms waiting for pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.794307   45580 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:55.984645   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:43:57.310000   46126 docker.go:219] disabling docker service ...
	I1128 00:43:57.310066   46126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:43:57.324484   46126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:43:57.339752   46126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:43:57.444051   46126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:43:57.557773   46126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:43:57.571662   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:43:57.591169   46126 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 00:43:57.591230   46126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:57.605399   46126 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:43:57.605462   46126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:57.617783   46126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:57.629258   46126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:57.639844   46126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:43:57.651810   46126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:43:57.663353   46126 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:43:57.663403   46126 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:43:57.679095   46126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:43:57.688096   46126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:43:57.795868   46126 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 00:43:57.970597   46126 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:43:57.970661   46126 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:43:57.975830   46126 start.go:540] Will wait 60s for crictl version
	I1128 00:43:57.975900   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:43:57.980469   46126 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:43:58.022819   46126 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:43:58.022932   46126 ssh_runner.go:195] Run: crio --version
	I1128 00:43:58.078060   46126 ssh_runner.go:195] Run: crio --version
	I1128 00:43:58.130219   46126 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 00:43:55.298307   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0: (2.661319898s)
	I1128 00:43:55.298330   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.0 from cache
	I1128 00:43:55.298358   45815 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1128 00:43:55.298411   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1128 00:43:56.256987   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1128 00:43:56.257041   45815 cache_images.go:123] Successfully loaded all cached images
	I1128 00:43:56.257048   45815 cache_images.go:92] LoadImages completed in 17.872790347s
	I1128 00:43:56.257142   45815 ssh_runner.go:195] Run: crio config
	I1128 00:43:56.342206   45815 cni.go:84] Creating CNI manager for ""
	I1128 00:43:56.342230   45815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:43:56.342248   45815 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:43:56.342265   45815 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.195 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-473615 NodeName:no-preload-473615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:43:56.342421   45815 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-473615"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:43:56.342519   45815 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-473615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.0 ClusterName:no-preload-473615 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 00:43:56.342581   45815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.0
	I1128 00:43:56.352200   45815 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:43:56.352275   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:43:56.360863   45815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1128 00:43:56.378620   45815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1128 00:43:56.396120   45815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1128 00:43:56.415090   45815 ssh_runner.go:195] Run: grep 192.168.61.195	control-plane.minikube.internal$ /etc/hosts
	I1128 00:43:56.419072   45815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:56.434497   45815 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615 for IP: 192.168.61.195
	I1128 00:43:56.434534   45815 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:43:56.434702   45815 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:43:56.434766   45815 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:43:56.434899   45815 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.key
	I1128 00:43:56.434990   45815 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/apiserver.key.6c770a2d
	I1128 00:43:56.435043   45815 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/proxy-client.key
	I1128 00:43:56.435190   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:43:56.435231   45815 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:43:56.435249   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:43:56.435280   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:43:56.435317   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:43:56.435348   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:43:56.435402   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:56.436170   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:43:56.464712   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 00:43:56.492394   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:43:56.517331   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1128 00:43:56.540656   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:43:56.562997   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:43:56.587574   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:43:56.614358   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:43:56.640027   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:43:56.666632   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:43:56.690925   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:43:56.716816   45815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:43:56.734079   45815 ssh_runner.go:195] Run: openssl version
	I1128 00:43:56.739942   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:43:56.751230   45815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:56.757607   45815 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:56.757662   45815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:56.764184   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:43:56.777196   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:43:56.788408   45815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:43:56.793610   45815 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:43:56.793667   45815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:43:56.799203   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:43:56.809821   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:43:56.820489   45815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:43:56.825268   45815 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:43:56.825335   45815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:43:56.830869   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:43:56.843707   45815 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:43:56.848717   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:43:56.855268   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:43:56.861889   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:43:56.867773   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:43:56.874642   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:43:56.882143   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1128 00:43:56.889812   45815 kubeadm.go:404] StartCluster: {Name:no-preload-473615 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.0 ClusterName:no-preload-473615 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.195 Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:43:56.889969   45815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:43:56.890021   45815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:43:56.931994   45815 cri.go:89] found id: ""
	I1128 00:43:56.932061   45815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:43:56.941996   45815 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:43:56.942014   45815 kubeadm.go:636] restartCluster start
	I1128 00:43:56.942074   45815 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:43:56.950854   45815 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:56.951919   45815 kubeconfig.go:92] found "no-preload-473615" server: "https://192.168.61.195:8443"
	I1128 00:43:56.954777   45815 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:43:56.963839   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:56.963902   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:56.974803   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:56.974821   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:56.974869   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:56.989023   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:57.489949   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:57.490022   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:57.501695   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:57.989930   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:57.990014   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:58.002435   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:58.489856   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:58.489946   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:58.506641   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:58.131523   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetIP
	I1128 00:43:58.134378   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:58.134826   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:58.134859   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:58.135087   46126 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1128 00:43:58.139363   46126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:58.151488   46126 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:43:58.151552   46126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:43:58.193551   46126 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 00:43:58.193618   46126 ssh_runner.go:195] Run: which lz4
	I1128 00:43:58.197624   46126 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 00:43:58.201842   46126 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 00:43:58.201875   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 00:44:00.068140   46126 crio.go:444] Took 1.870561 seconds to copy over tarball
	I1128 00:44:00.068221   46126 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 00:43:58.122924   45269 main.go:141] libmachine: (old-k8s-version-732472) Waiting to get IP...
	I1128 00:43:58.123826   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:58.124165   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:58.124263   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:58.124146   46882 retry.go:31] will retry after 249.216665ms: waiting for machine to come up
	I1128 00:43:58.374969   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:58.375510   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:58.375537   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:58.375457   46882 retry.go:31] will retry after 317.223146ms: waiting for machine to come up
	I1128 00:43:58.694027   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:58.694483   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:58.694535   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:58.694443   46882 retry.go:31] will retry after 362.880377ms: waiting for machine to come up
	I1128 00:43:59.058976   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:59.059623   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:59.059650   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:59.059571   46882 retry.go:31] will retry after 545.497342ms: waiting for machine to come up
	I1128 00:43:59.606962   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:59.607607   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:59.607633   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:59.607558   46882 retry.go:31] will retry after 678.467206ms: waiting for machine to come up
	I1128 00:44:00.287531   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:00.288062   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:00.288103   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:00.288054   46882 retry.go:31] will retry after 817.7633ms: waiting for machine to come up
	I1128 00:44:01.107179   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:01.107748   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:01.107776   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:01.107690   46882 retry.go:31] will retry after 1.02533736s: waiting for machine to come up
	I1128 00:44:02.134384   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:02.134940   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:02.134972   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:02.134867   46882 retry.go:31] will retry after 1.291264059s: waiting for machine to come up
	I1128 00:43:58.491595   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:00.983179   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:43:58.989453   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:58.989568   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:59.006339   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:59.489912   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:59.490007   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:59.505297   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:59.989924   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:59.990020   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:00.004118   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:00.489346   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:00.489421   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:00.504026   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:00.989739   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:00.989828   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:01.006279   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:01.489872   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:01.489975   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:01.504734   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:01.989185   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:01.989269   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:02.000313   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:02.489165   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:02.489246   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:02.505444   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:02.989956   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:02.990024   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:03.003038   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:03.489556   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:03.489663   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:03.502192   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:03.282407   46126 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.2141625s)
	I1128 00:44:03.282432   46126 crio.go:451] Took 3.214263 seconds to extract the tarball
	I1128 00:44:03.282440   46126 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 00:44:03.324470   46126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:44:03.375858   46126 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 00:44:03.375881   46126 cache_images.go:84] Images are preloaded, skipping loading
	I1128 00:44:03.375944   46126 ssh_runner.go:195] Run: crio config
	I1128 00:44:03.440441   46126 cni.go:84] Creating CNI manager for ""
	I1128 00:44:03.440462   46126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:03.440479   46126 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:44:03.440496   46126 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.242 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-488423 NodeName:default-k8s-diff-port-488423 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.242"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.242 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:44:03.440670   46126 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.242
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-488423"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.242
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.242"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:44:03.440746   46126 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-488423 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.242
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-488423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1128 00:44:03.440830   46126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 00:44:03.450060   46126 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:44:03.450138   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:44:03.458748   46126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1128 00:44:03.475315   46126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:44:03.492886   46126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1128 00:44:03.509665   46126 ssh_runner.go:195] Run: grep 192.168.72.242	control-plane.minikube.internal$ /etc/hosts
	I1128 00:44:03.513441   46126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.242	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:44:03.527336   46126 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423 for IP: 192.168.72.242
	I1128 00:44:03.527373   46126 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:44:03.527539   46126 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:44:03.527592   46126 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:44:03.527690   46126 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/client.key
	I1128 00:44:03.527770   46126 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/apiserver.key.05574f60
	I1128 00:44:03.527827   46126 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/proxy-client.key
	I1128 00:44:03.527966   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:44:03.528009   46126 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:44:03.528024   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:44:03.528062   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:44:03.528098   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:44:03.528133   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:44:03.528188   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:44:03.528787   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:44:03.553210   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 00:44:03.578548   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:44:03.604661   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 00:44:03.627640   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:44:03.653147   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:44:03.681991   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:44:03.706068   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:44:03.730092   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:44:03.751326   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:44:03.776165   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:44:03.801844   46126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:44:03.819762   46126 ssh_runner.go:195] Run: openssl version
	I1128 00:44:03.826895   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:44:03.836806   46126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:44:03.842921   46126 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:44:03.842983   46126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:44:03.848802   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:44:03.859065   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:44:03.869720   46126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:03.874600   46126 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:03.874670   46126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:03.880712   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:44:03.891524   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:44:03.901286   46126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:44:03.906102   46126 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:44:03.906163   46126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:44:03.911563   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:44:03.921606   46126 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:44:03.926553   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:44:03.932640   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:44:03.938482   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:44:03.944483   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:44:03.950430   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:44:03.956197   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1128 00:44:03.962543   46126 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-488423 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-488423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.242 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:44:03.962647   46126 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:44:03.962700   46126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:04.014418   46126 cri.go:89] found id: ""
	I1128 00:44:04.014499   46126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:44:04.024132   46126 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:44:04.024178   46126 kubeadm.go:636] restartCluster start
	I1128 00:44:04.024239   46126 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:44:04.032856   46126 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.034010   46126 kubeconfig.go:92] found "default-k8s-diff-port-488423" server: "https://192.168.72.242:8444"
	I1128 00:44:04.036458   46126 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:44:04.044461   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.044513   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.054697   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.054714   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.054759   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.066995   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.567687   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.567784   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.579528   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:05.067882   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:05.067970   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:05.082579   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:05.568116   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:05.568240   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:05.579606   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:06.067125   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:06.067229   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:06.078637   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:06.567159   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:06.567258   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:06.578623   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:07.067770   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:07.067864   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:07.081883   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:03.427919   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:03.428413   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:03.428442   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:03.428350   46882 retry.go:31] will retry after 1.150784696s: waiting for machine to come up
	I1128 00:44:04.580519   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:04.580976   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:04.581008   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:04.580941   46882 retry.go:31] will retry after 1.981268381s: waiting for machine to come up
	I1128 00:44:06.564123   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:06.564623   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:06.564641   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:06.564596   46882 retry.go:31] will retry after 2.79895226s: waiting for machine to come up
	I1128 00:44:02.984445   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:05.483562   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:03.989899   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:03.995828   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.009197   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.489749   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.489829   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.501445   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.989934   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.990019   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:05.004077   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:05.489549   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:05.489634   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:05.501227   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:05.989858   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:05.989940   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:06.003151   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:06.489699   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:06.489785   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:06.502937   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:06.964667   45815 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:44:06.964705   45815 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:44:06.964720   45815 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:44:06.964808   45815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:07.008487   45815 cri.go:89] found id: ""
	I1128 00:44:07.008572   45815 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:44:07.028576   45815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:44:07.040057   45815 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:44:07.040130   45815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:07.050063   45815 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:07.050085   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:07.199305   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:08.265283   45815 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.065924411s)
	I1128 00:44:08.265324   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:08.468254   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:08.570027   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:08.650823   45815 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:44:08.650900   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:08.667640   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:07.567667   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:07.567751   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:07.580778   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:08.067282   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:08.067368   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:08.080618   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:08.567146   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:08.567232   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:08.580324   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:09.067606   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:09.067728   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:09.083426   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:09.567987   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:09.568084   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:09.579657   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:10.067205   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:10.067292   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:10.082466   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:10.568064   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:10.568159   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:10.583356   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:11.067987   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:11.068114   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:11.084486   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:11.567945   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:11.568057   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:11.583108   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:12.068099   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:12.068186   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:12.079172   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:09.366118   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:09.366642   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:09.366677   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:09.366580   46882 retry.go:31] will retry after 2.538437833s: waiting for machine to come up
	I1128 00:44:11.906292   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:11.906799   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:11.906823   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:11.906751   46882 retry.go:31] will retry after 4.351501946s: waiting for machine to come up
	I1128 00:44:07.983966   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:09.985333   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:12.483805   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:09.182449   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:09.681686   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:10.181905   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:10.681692   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:11.181652   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:11.209900   45815 api_server.go:72] duration metric: took 2.559073582s to wait for apiserver process to appear ...
	I1128 00:44:11.209935   45815 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:44:11.209954   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:15.242230   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:15.242261   45815 api_server.go:103] status: https://192.168.61.195:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:15.242276   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:15.285509   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:15.285538   45815 api_server.go:103] status: https://192.168.61.195:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:15.786232   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:15.791529   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:15.791565   45815 api_server.go:103] status: https://192.168.61.195:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:16.285909   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:16.290996   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:16.291040   45815 api_server.go:103] status: https://192.168.61.195:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:16.786199   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:16.792488   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 200:
	ok
	I1128 00:44:16.805778   45815 api_server.go:141] control plane version: v1.29.0-rc.0
	I1128 00:44:16.805807   45815 api_server.go:131] duration metric: took 5.595863517s to wait for apiserver health ...
	I1128 00:44:16.805817   45815 cni.go:84] Creating CNI manager for ""
	I1128 00:44:16.805825   45815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:16.807924   45815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:44:12.567969   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:12.568085   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:12.579496   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:13.068092   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:13.068164   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:13.079081   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:13.567677   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:13.567773   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:13.579000   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:14.044782   46126 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:44:14.044818   46126 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:44:14.044832   46126 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:44:14.044927   46126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:14.090411   46126 cri.go:89] found id: ""
	I1128 00:44:14.090487   46126 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:44:14.106216   46126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:44:14.116309   46126 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:44:14.116367   46126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:14.125060   46126 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:14.125082   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:14.259194   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:14.923712   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:15.113501   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:15.221455   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:15.317171   46126 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:44:15.317269   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:15.332625   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:15.847268   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:16.347347   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:16.847441   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:16.259741   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.260326   45269 main.go:141] libmachine: (old-k8s-version-732472) Found IP for machine: 192.168.39.172
	I1128 00:44:16.260347   45269 main.go:141] libmachine: (old-k8s-version-732472) Reserving static IP address...
	I1128 00:44:16.260368   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has current primary IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.260940   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "old-k8s-version-732472", mac: "52:54:00:ff:2b:fd", ip: "192.168.39.172"} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.260978   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | skip adding static IP to network mk-old-k8s-version-732472 - found existing host DHCP lease matching {name: "old-k8s-version-732472", mac: "52:54:00:ff:2b:fd", ip: "192.168.39.172"}
	I1128 00:44:16.261003   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Getting to WaitForSSH function...
	I1128 00:44:16.261021   45269 main.go:141] libmachine: (old-k8s-version-732472) Reserved static IP address: 192.168.39.172
	I1128 00:44:16.261037   45269 main.go:141] libmachine: (old-k8s-version-732472) Waiting for SSH to be available...
	I1128 00:44:16.264000   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.264370   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.264402   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.264496   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Using SSH client type: external
	I1128 00:44:16.264560   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa (-rw-------)
	I1128 00:44:16.264600   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:44:16.264624   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | About to run SSH command:
	I1128 00:44:16.264641   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | exit 0
	I1128 00:44:16.373651   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | SSH cmd err, output: <nil>: 
	I1128 00:44:16.374185   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetConfigRaw
	I1128 00:44:16.374992   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetIP
	I1128 00:44:16.378530   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.378958   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.378987   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.379390   45269 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/config.json ...
	I1128 00:44:16.379622   45269 machine.go:88] provisioning docker machine ...
	I1128 00:44:16.379646   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:16.379854   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetMachineName
	I1128 00:44:16.380005   45269 buildroot.go:166] provisioning hostname "old-k8s-version-732472"
	I1128 00:44:16.380024   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetMachineName
	I1128 00:44:16.380152   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:16.382908   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.383346   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.383376   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.383604   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:16.383824   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:16.384024   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:16.384179   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:16.384365   45269 main.go:141] libmachine: Using SSH client type: native
	I1128 00:44:16.384875   45269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1128 00:44:16.384902   45269 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-732472 && echo "old-k8s-version-732472" | sudo tee /etc/hostname
	I1128 00:44:16.547302   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-732472
	
	I1128 00:44:16.547378   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:16.550883   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.551409   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.551448   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.551634   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:16.551888   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:16.552113   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:16.552258   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:16.552465   45269 main.go:141] libmachine: Using SSH client type: native
	I1128 00:44:16.552965   45269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1128 00:44:16.552994   45269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-732472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-732472/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-732472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:44:16.705539   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:44:16.705577   45269 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:44:16.705601   45269 buildroot.go:174] setting up certificates
	I1128 00:44:16.705611   45269 provision.go:83] configureAuth start
	I1128 00:44:16.705622   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetMachineName
	I1128 00:44:16.705962   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetIP
	I1128 00:44:16.708726   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.709231   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.709283   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.709531   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:16.712023   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.712491   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.712524   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.712658   45269 provision.go:138] copyHostCerts
	I1128 00:44:16.712720   45269 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:44:16.712734   45269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:44:16.712835   45269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:44:16.712990   45269 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:44:16.713005   45269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:44:16.713041   45269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:44:16.713154   45269 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:44:16.713168   45269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:44:16.713201   45269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:44:16.713291   45269 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-732472 san=[192.168.39.172 192.168.39.172 localhost 127.0.0.1 minikube old-k8s-version-732472]
	I1128 00:44:17.255079   45269 provision.go:172] copyRemoteCerts
	I1128 00:44:17.255157   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:44:17.255184   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:17.258078   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.258486   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:17.258522   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.258704   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:17.258892   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.259071   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:17.259278   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:44:17.360891   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1128 00:44:14.981992   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:16.984334   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:16.809569   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:44:16.837545   45815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:44:16.884377   45815 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:44:16.901252   45815 system_pods.go:59] 9 kube-system pods found
	I1128 00:44:16.901296   45815 system_pods.go:61] "coredns-76f75df574-54p94" [fc2580d3-8c03-46c8-aa43-fce9472a4bbc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:44:16.901310   45815 system_pods.go:61] "coredns-76f75df574-9ptz7" [c51a1796-37bb-411b-8477-fb4d8c7e7cb2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:44:16.901322   45815 system_pods.go:61] "etcd-no-preload-473615" [c789418f-23b1-4e84-95df-e339afc358e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 00:44:16.901337   45815 system_pods.go:61] "kube-apiserver-no-preload-473615" [204c5f02-7e14-4761-9af0-606f227dee63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 00:44:16.901351   45815 system_pods.go:61] "kube-controller-manager-no-preload-473615" [2d96a78f-b0c9-4731-a8a1-ec63459a09ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 00:44:16.901368   45815 system_pods.go:61] "kube-proxy-trr4j" [df593d3d-db4c-45f9-ad79-f35fe2cdef84] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 00:44:16.901379   45815 system_pods.go:61] "kube-scheduler-no-preload-473615" [5fe2c87b-af8b-4184-8b62-399e488dcb5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 00:44:16.901393   45815 system_pods.go:61] "metrics-server-57f55c9bc5-lh4m8" [4c3ae55b-befb-44d2-8982-592acdf3eab9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:44:16.901408   45815 system_pods.go:61] "storage-provisioner" [a3e71dd4-570e-4895-aac4-d98dfbd69a6a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 00:44:16.901423   45815 system_pods.go:74] duration metric: took 17.023663ms to wait for pod list to return data ...
	I1128 00:44:16.901434   45815 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:44:16.905738   45815 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:44:16.905766   45815 node_conditions.go:123] node cpu capacity is 2
	I1128 00:44:16.905776   45815 node_conditions.go:105] duration metric: took 4.335236ms to run NodePressure ...
	I1128 00:44:16.905791   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:17.532813   45815 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:44:17.548788   45815 kubeadm.go:787] kubelet initialised
	I1128 00:44:17.548814   45815 kubeadm.go:788] duration metric: took 15.969396ms waiting for restarted kubelet to initialise ...
	I1128 00:44:17.548824   45815 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:44:17.569590   45815 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-54p94" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:17.388160   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 00:44:17.415589   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:44:17.443880   45269 provision.go:86] duration metric: configureAuth took 738.257631ms
	I1128 00:44:17.443913   45269 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:44:17.444142   45269 config.go:182] Loaded profile config "old-k8s-version-732472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 00:44:17.444240   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:17.447355   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.447699   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:17.447726   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.447980   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:17.448213   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.448382   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.448542   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:17.448730   45269 main.go:141] libmachine: Using SSH client type: native
	I1128 00:44:17.449148   45269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1128 00:44:17.449173   45269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:44:17.825162   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:44:17.825202   45269 machine.go:91] provisioned docker machine in 1.445550198s
	I1128 00:44:17.825215   45269 start.go:300] post-start starting for "old-k8s-version-732472" (driver="kvm2")
	I1128 00:44:17.825229   45269 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:44:17.825255   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:17.825631   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:44:17.825665   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:17.829047   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.829650   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:17.829813   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.829885   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:17.830108   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.830270   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:17.830427   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:44:17.933926   45269 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:44:17.939164   45269 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:44:17.939192   45269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:44:17.939273   45269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:44:17.939364   45269 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:44:17.939481   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:44:17.950901   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:44:17.983827   45269 start.go:303] post-start completed in 158.593642ms
	I1128 00:44:17.983856   45269 fix.go:56] fixHost completed within 21.237897087s
	I1128 00:44:17.983880   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:17.988473   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.988983   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:17.989011   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.989353   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:17.989611   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.989755   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.989981   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:17.990202   45269 main.go:141] libmachine: Using SSH client type: native
	I1128 00:44:17.990729   45269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1128 00:44:17.990748   45269 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:44:18.139114   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701132258.087547922
	
	I1128 00:44:18.139142   45269 fix.go:206] guest clock: 1701132258.087547922
	I1128 00:44:18.139154   45269 fix.go:219] Guest: 2023-11-28 00:44:18.087547922 +0000 UTC Remote: 2023-11-28 00:44:17.983860571 +0000 UTC m=+360.654750753 (delta=103.687351ms)
	I1128 00:44:18.139206   45269 fix.go:190] guest clock delta is within tolerance: 103.687351ms
	I1128 00:44:18.139217   45269 start.go:83] releasing machines lock for "old-k8s-version-732472", held for 21.393285553s
	I1128 00:44:18.139256   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:18.139552   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetIP
	I1128 00:44:18.142899   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.143376   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:18.143407   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.143562   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:18.144123   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:18.144308   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:18.144414   45269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:44:18.144473   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:18.144586   45269 ssh_runner.go:195] Run: cat /version.json
	I1128 00:44:18.144614   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:18.147761   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.147994   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.148459   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:18.148542   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.148581   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:18.148605   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.148878   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:18.148892   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:18.149080   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:18.149094   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:18.149266   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:18.149288   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:18.149473   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:44:18.149488   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:44:18.271569   45269 ssh_runner.go:195] Run: systemctl --version
	I1128 00:44:18.277814   45269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:44:18.432301   45269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:44:18.438677   45269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:44:18.438749   45269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:44:18.455128   45269 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:44:18.455155   45269 start.go:472] detecting cgroup driver to use...
	I1128 00:44:18.455229   45269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:44:18.472928   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:44:18.490329   45269 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:44:18.490409   45269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:44:18.505705   45269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:44:18.523509   45269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:44:18.696691   45269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:44:18.829641   45269 docker.go:219] disabling docker service ...
	I1128 00:44:18.829775   45269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:44:18.847903   45269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:44:18.863690   45269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:44:19.002181   45269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:44:19.130955   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:44:19.146034   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:44:19.165714   45269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1128 00:44:19.165790   45269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:44:19.176303   45269 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:44:19.176368   45269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:44:19.186698   45269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:44:19.196137   45269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:44:19.205054   45269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:44:19.215067   45269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:44:19.224332   45269 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:44:19.224376   45269 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:44:19.238079   45269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:44:19.246692   45269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:44:19.360913   45269 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 00:44:19.548488   45269 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:44:19.548563   45269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:44:19.553293   45269 start.go:540] Will wait 60s for crictl version
	I1128 00:44:19.553362   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:19.557103   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:44:19.605572   45269 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:44:19.605662   45269 ssh_runner.go:195] Run: crio --version
	I1128 00:44:19.655808   45269 ssh_runner.go:195] Run: crio --version
	I1128 00:44:19.709415   45269 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1128 00:44:17.346814   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:17.847354   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:17.878161   46126 api_server.go:72] duration metric: took 2.560990106s to wait for apiserver process to appear ...
	I1128 00:44:17.878189   46126 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:44:17.878218   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:17.878696   46126 api_server.go:269] stopped: https://192.168.72.242:8444/healthz: Get "https://192.168.72.242:8444/healthz": dial tcp 192.168.72.242:8444: connect: connection refused
	I1128 00:44:17.878732   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:17.879110   46126 api_server.go:269] stopped: https://192.168.72.242:8444/healthz: Get "https://192.168.72.242:8444/healthz": dial tcp 192.168.72.242:8444: connect: connection refused
	I1128 00:44:18.379800   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:19.710653   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetIP
	I1128 00:44:19.713912   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:19.714358   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:19.714402   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:19.714586   45269 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1128 00:44:19.719516   45269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:44:19.736367   45269 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1128 00:44:19.736422   45269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:44:19.788917   45269 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1128 00:44:19.789021   45269 ssh_runner.go:195] Run: which lz4
	I1128 00:44:19.793502   45269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 00:44:19.797933   45269 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 00:44:19.797967   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1128 00:44:21.595649   45269 crio.go:444] Took 1.802185 seconds to copy over tarball
	I1128 00:44:21.595754   45269 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 00:44:19.483696   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:21.485632   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:19.612824   45815 pod_ready.go:102] pod "coredns-76f75df574-54p94" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:22.111469   45815 pod_ready.go:92] pod "coredns-76f75df574-54p94" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:22.111506   45815 pod_ready.go:81] duration metric: took 4.541884744s waiting for pod "coredns-76f75df574-54p94" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:22.111522   45815 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-9ptz7" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:22.118896   45815 pod_ready.go:92] pod "coredns-76f75df574-9ptz7" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:22.118916   45815 pod_ready.go:81] duration metric: took 7.386009ms waiting for pod "coredns-76f75df574-9ptz7" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:22.118925   45815 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:22.651574   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:22.651606   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:22.651632   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:22.731086   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:22.731124   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:22.879396   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:22.889686   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:22.889721   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:23.380219   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:23.387416   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:23.387458   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:23.880170   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:23.886215   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:23.886286   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:24.380095   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:24.387531   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 200:
	ok
	I1128 00:44:24.411131   46126 api_server.go:141] control plane version: v1.28.4
	I1128 00:44:24.411169   46126 api_server.go:131] duration metric: took 6.532961174s to wait for apiserver health ...
	I1128 00:44:24.411180   46126 cni.go:84] Creating CNI manager for ""
	I1128 00:44:24.411186   46126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:24.701599   46126 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:44:24.853101   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:44:24.878687   46126 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:44:24.924669   46126 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:44:24.942030   46126 system_pods.go:59] 8 kube-system pods found
	I1128 00:44:24.942063   46126 system_pods.go:61] "coredns-5dd5756b68-n7qpb" [d027f799-6ced-488e-a4f7-6df351193c64] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:44:24.942074   46126 system_pods.go:61] "etcd-default-k8s-diff-port-488423" [55bf80da-df13-4429-962c-7fdb5ab44ea8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 00:44:24.942084   46126 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-488423" [88715645-e98e-42be-ad99-cc7711605abc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 00:44:24.942094   46126 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-488423" [07935350-12e0-4e86-8f88-7e03890aa417] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 00:44:24.942104   46126 system_pods.go:61] "kube-proxy-2sfbm" [8d92ac1f-4070-4000-9bc6-3d277e0c8c6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 00:44:24.942115   46126 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-488423" [42baed98-6b29-4f33-8bb3-df082a1b36ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 00:44:24.942134   46126 system_pods.go:61] "metrics-server-57f55c9bc5-fk9xx" [8b0d0cd6-41c5-4b67-98f9-f046e959e0e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:44:24.942152   46126 system_pods.go:61] "storage-provisioner" [f1e6e7d1-86aa-403c-b753-2b94beb7d7b1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 00:44:24.942163   46126 system_pods.go:74] duration metric: took 17.475554ms to wait for pod list to return data ...
	I1128 00:44:24.942224   46126 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:44:26.037379   46126 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:44:26.037423   46126 node_conditions.go:123] node cpu capacity is 2
	I1128 00:44:26.037450   46126 node_conditions.go:105] duration metric: took 1.095218932s to run NodePressure ...
	I1128 00:44:26.037473   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:27.084620   46126 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.047120714s)
	I1128 00:44:27.084659   46126 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:44:27.100248   46126 kubeadm.go:787] kubelet initialised
	I1128 00:44:27.100282   46126 kubeadm.go:788] duration metric: took 15.606572ms waiting for restarted kubelet to initialise ...
	I1128 00:44:27.100293   46126 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:44:27.108069   46126 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.117188   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.117221   46126 pod_ready.go:81] duration metric: took 9.127662ms waiting for pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.117238   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.117247   46126 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.123182   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.123213   46126 pod_ready.go:81] duration metric: took 5.9547ms waiting for pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.123226   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.123235   46126 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.130170   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.130196   46126 pod_ready.go:81] duration metric: took 6.952194ms waiting for pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.130209   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.130216   46126 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.136895   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.136925   46126 pod_ready.go:81] duration metric: took 6.699975ms waiting for pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.136940   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.136950   46126 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2sfbm" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:24.811723   45269 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.215918902s)
	I1128 00:44:24.811757   45269 crio.go:451] Took 3.216081 seconds to extract the tarball
	I1128 00:44:24.811769   45269 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 00:44:24.856120   45269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:44:24.918138   45269 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1128 00:44:24.918185   45269 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1128 00:44:24.918257   45269 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:24.918296   45269 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1128 00:44:24.918305   45269 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1128 00:44:24.918314   45269 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:24.918297   45269 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:24.918261   45269 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:44:24.918264   45269 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:24.918585   45269 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:24.919955   45269 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:24.919959   45269 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:24.919988   45269 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1128 00:44:24.919964   45269 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:44:24.920093   45269 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:24.920302   45269 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:24.920482   45269 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:24.920497   45269 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1128 00:44:25.041095   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:25.048823   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:25.071401   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1128 00:44:25.073489   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:25.081089   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1128 00:44:25.083887   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:25.100582   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:25.150855   45269 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1128 00:44:25.150909   45269 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:25.150960   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.151148   45269 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1128 00:44:25.151198   45269 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:25.151250   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.181984   45269 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1128 00:44:25.182039   45269 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1128 00:44:25.182089   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.260634   45269 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1128 00:44:25.260687   45269 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:25.260744   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.269386   45269 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1128 00:44:25.269436   45269 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1128 00:44:25.269460   45269 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1128 00:44:25.269480   45269 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:25.269508   45269 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1128 00:44:25.269517   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.269539   45269 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:25.269552   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.269573   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.269626   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:25.269642   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:25.269701   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1128 00:44:25.269733   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:25.368354   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1128 00:44:25.368405   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1128 00:44:25.368462   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1128 00:44:25.368474   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:25.368536   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:25.368537   45269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1128 00:44:25.375204   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1128 00:44:25.375378   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1128 00:44:25.439797   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1128 00:44:25.465699   45269 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1128 00:44:25.465731   45269 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1128 00:44:25.465788   45269 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1128 00:44:25.465795   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1128 00:44:25.465810   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1128 00:44:25.797872   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:44:27.031275   45269 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.233351991s)
	I1128 00:44:27.031525   45269 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.565711109s)
	I1128 00:44:27.031549   45269 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1128 00:44:27.031594   45269 cache_images.go:92] LoadImages completed in 2.113388877s
	W1128 00:44:27.031667   45269 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I1128 00:44:27.031754   45269 ssh_runner.go:195] Run: crio config
	I1128 00:44:27.100851   45269 cni.go:84] Creating CNI manager for ""
	I1128 00:44:27.100882   45269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:27.100901   45269 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:44:27.100924   45269 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-732472 NodeName:old-k8s-version-732472 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1128 00:44:27.101119   45269 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-732472"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-732472
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.172:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:44:27.101241   45269 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-732472 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-732472 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 00:44:27.101312   45269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1128 00:44:27.111964   45269 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:44:27.112049   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:44:27.122796   45269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1128 00:44:27.149768   45269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:44:27.168520   45269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1128 00:44:27.187296   45269 ssh_runner.go:195] Run: grep 192.168.39.172	control-plane.minikube.internal$ /etc/hosts
	I1128 00:44:27.191606   45269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:44:27.205482   45269 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472 for IP: 192.168.39.172
	I1128 00:44:27.205521   45269 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:44:27.205720   45269 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:44:27.205758   45269 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:44:27.205825   45269 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.key
	I1128 00:44:27.205885   45269 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/apiserver.key.ee96354a
	I1128 00:44:27.205931   45269 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/proxy-client.key
	I1128 00:44:27.206060   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:44:27.206115   45269 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:44:27.206130   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:44:27.206176   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:44:27.206214   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:44:27.206251   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:44:27.206313   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:44:27.207009   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:44:27.233932   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 00:44:27.258138   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:44:27.282203   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 00:44:27.309304   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:44:27.335945   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:44:27.360118   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:44:23.984808   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:26.118398   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:27.491683   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "kube-proxy-2sfbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.491724   46126 pod_ready.go:81] duration metric: took 354.756767ms waiting for pod "kube-proxy-2sfbm" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.491736   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "kube-proxy-2sfbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.491745   46126 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.890269   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.890299   46126 pod_ready.go:81] duration metric: took 398.544263ms waiting for pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.890316   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.890324   46126 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:28.289016   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:28.289043   46126 pod_ready.go:81] duration metric: took 398.709637ms waiting for pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:28.289055   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:28.289062   46126 pod_ready.go:38] duration metric: took 1.188759196s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:44:28.289084   46126 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:44:28.301648   46126 ops.go:34] apiserver oom_adj: -16
	I1128 00:44:28.301676   46126 kubeadm.go:640] restartCluster took 24.277487612s
	I1128 00:44:28.301683   46126 kubeadm.go:406] StartCluster complete in 24.339149368s
	I1128 00:44:28.301697   46126 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:44:28.301770   46126 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:44:28.303560   46126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:44:28.303802   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:44:28.303915   46126 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:44:28.303994   46126 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-488423"
	I1128 00:44:28.304023   46126 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-488423"
	W1128 00:44:28.304038   46126 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:44:28.304040   46126 config.go:182] Loaded profile config "default-k8s-diff-port-488423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:44:28.304063   46126 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-488423"
	I1128 00:44:28.304117   46126 host.go:66] Checking if "default-k8s-diff-port-488423" exists ...
	I1128 00:44:28.304118   46126 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-488423"
	W1128 00:44:28.304134   46126 addons.go:240] addon metrics-server should already be in state true
	I1128 00:44:28.304220   46126 host.go:66] Checking if "default-k8s-diff-port-488423" exists ...
	I1128 00:44:28.304547   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.304589   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.304669   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.304741   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.304928   46126 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-488423"
	I1128 00:44:28.304956   46126 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-488423"
	I1128 00:44:28.305388   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.305437   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.310450   46126 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-488423" context rescaled to 1 replicas
	I1128 00:44:28.310496   46126 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.242 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:44:28.312602   46126 out.go:177] * Verifying Kubernetes components...
	I1128 00:44:28.314027   46126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:44:28.321407   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40955
	I1128 00:44:28.321423   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41137
	I1128 00:44:28.322247   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.322287   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.322797   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.322820   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.322942   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.322968   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.323210   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.323242   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35475
	I1128 00:44:28.323323   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.323556   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.323775   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.323818   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.323857   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.323891   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.323937   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.323957   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.324293   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.324471   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:44:28.327954   46126 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-488423"
	W1128 00:44:28.327972   46126 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:44:28.327993   46126 host.go:66] Checking if "default-k8s-diff-port-488423" exists ...
	I1128 00:44:28.328327   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.328355   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.342376   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40729
	I1128 00:44:28.342789   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.343325   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.343366   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.343751   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.343978   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38927
	I1128 00:44:28.343995   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:44:28.344392   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.344983   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.345009   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.345366   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.345910   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:44:28.348242   46126 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:44:28.346449   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39125
	I1128 00:44:28.350126   46126 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:44:28.350147   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:44:28.350166   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:44:28.346666   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.350250   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.348589   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.350911   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.350930   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.351442   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.351817   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:44:28.353691   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:44:28.353876   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.355460   46126 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 00:44:24.141365   45815 pod_ready.go:102] pod "etcd-no-preload-473615" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:26.518655   45815 pod_ready.go:102] pod "etcd-no-preload-473615" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:26.887843   45815 pod_ready.go:92] pod "etcd-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:26.887877   45815 pod_ready.go:81] duration metric: took 4.768943982s waiting for pod "etcd-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:26.887891   45815 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:26.909504   45815 pod_ready.go:92] pod "kube-apiserver-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:26.909600   45815 pod_ready.go:81] duration metric: took 21.699474ms waiting for pod "kube-apiserver-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:26.909627   45815 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:28.354335   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:44:28.354504   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:44:28.357068   46126 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 00:44:28.357088   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 00:44:28.357094   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.357109   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:44:28.357228   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:44:28.357356   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:44:28.357475   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:44:28.360015   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.360725   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:44:28.360785   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.360994   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:44:28.361177   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:44:28.361341   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:44:28.361503   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:44:28.368150   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I1128 00:44:28.368511   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.369005   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.369023   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.369326   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.369481   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:44:28.370807   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:44:28.371066   46126 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:44:28.371078   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:44:28.371092   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:44:28.373819   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.374409   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:44:28.374510   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:44:28.374541   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.374602   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:44:28.374688   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:44:28.374768   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:44:28.474380   46126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:44:28.505183   46126 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 00:44:28.505206   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 00:44:28.536550   46126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:44:28.584832   46126 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 00:44:28.584857   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 00:44:28.626477   46126 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1128 00:44:28.626473   46126 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-488423" to be "Ready" ...
	I1128 00:44:28.644406   46126 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:44:28.644436   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 00:44:28.671872   46126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:44:29.867337   46126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.330746736s)
	I1128 00:44:29.867437   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.867451   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.867490   46126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.393076585s)
	I1128 00:44:29.867532   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.867553   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.867827   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.867841   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.867850   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.867858   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.867988   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.868006   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.868029   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.868038   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.868129   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Closing plugin on server side
	I1128 00:44:29.868145   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.868159   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.868381   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.868400   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.868429   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Closing plugin on server side
	I1128 00:44:29.876482   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.876505   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.876724   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.876736   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.885484   46126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.213575767s)
	I1128 00:44:29.885534   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.885551   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.885841   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.885862   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.885873   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.885883   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.885887   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Closing plugin on server side
	I1128 00:44:29.886153   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Closing plugin on server side
	I1128 00:44:29.886164   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.886194   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.886211   46126 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-488423"
	I1128 00:44:29.889173   46126 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1128 00:44:29.890607   46126 addons.go:502] enable addons completed in 1.586699021s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1128 00:44:30.716680   46126 node_ready.go:58] node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.385529   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:44:27.411354   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:44:27.439142   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:44:27.466763   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:44:27.497738   45269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:44:27.518132   45269 ssh_runner.go:195] Run: openssl version
	I1128 00:44:27.524720   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:44:27.537673   45269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:44:27.542561   45269 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:44:27.542623   45269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:44:27.548137   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:44:27.558112   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:44:27.568318   45269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:44:27.573638   45269 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:44:27.573697   45269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:44:27.579739   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:44:27.589908   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:44:27.599937   45269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:27.606264   45269 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:27.606340   45269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:27.612850   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:44:27.623388   45269 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:44:27.628140   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:44:27.634670   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:44:27.642071   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:44:27.650207   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:44:27.656836   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:44:27.662837   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
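The six openssl invocations above run "x509 -checkend 86400", meaning they fail if a control-plane certificate expires within the next 24 hours. As an illustrative aside (not minikube's implementation), the same check can be expressed directly in Go with the standard crypto/x509 package; the certificate path below is one of the paths from the log, and the 24h window mirrors the 86400-second argument:

// certexpiry.go: minimal sketch of the check that `openssl x509 -checkend 86400`
// performs in the log above: report failure if a certificate expires within 24h.
// Illustrative only; the path is an example taken from the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Equivalent to -checkend: does NotAfter fall before now+d?
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h")
}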
	I1128 00:44:27.668909   45269 kubeadm.go:404] StartCluster: {Name:old-k8s-version-732472 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-732472 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:44:27.669005   45269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:44:27.669075   45269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:27.711918   45269 cri.go:89] found id: ""
	I1128 00:44:27.711993   45269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:44:27.722058   45269 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:44:27.722084   45269 kubeadm.go:636] restartCluster start
	I1128 00:44:27.722146   45269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:44:27.731619   45269 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:27.733224   45269 kubeconfig.go:92] found "old-k8s-version-732472" server: "https://192.168.39.172:8443"
	I1128 00:44:27.736867   45269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:44:27.747794   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:27.747862   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:27.762055   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:27.762079   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:27.762146   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:27.773241   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:28.273910   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:28.274001   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:28.286159   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:28.773393   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:28.773492   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:28.785781   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:29.274130   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:29.274199   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:29.289388   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:29.773916   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:29.774022   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:29.789483   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:30.273920   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:30.274026   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:30.285579   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:30.773910   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:30.774005   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:30.785536   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:31.273906   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:31.273977   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:31.285344   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:31.774284   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:31.774352   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:31.786435   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:32.273928   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:32.274008   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:32.289424   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:28.484735   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:30.983088   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:28.945293   45815 pod_ready.go:102] pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:30.445111   45815 pod_ready.go:92] pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:30.445133   45815 pod_ready.go:81] duration metric: took 3.535488087s waiting for pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.445143   45815 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-trr4j" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.450322   45815 pod_ready.go:92] pod "kube-proxy-trr4j" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:30.450342   45815 pod_ready.go:81] duration metric: took 5.193276ms waiting for pod "kube-proxy-trr4j" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.450350   45815 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.455002   45815 pod_ready.go:92] pod "kube-scheduler-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:30.455021   45815 pod_ready.go:81] duration metric: took 4.664949ms waiting for pod "kube-scheduler-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.455030   45815 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:32.915566   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:32.717086   46126 node_ready.go:58] node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:33.216905   46126 node_ready.go:49] node "default-k8s-diff-port-488423" has status "Ready":"True"
	I1128 00:44:33.216930   46126 node_ready.go:38] duration metric: took 4.590426391s waiting for node "default-k8s-diff-port-488423" to be "Ready" ...
	I1128 00:44:33.216938   46126 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:44:33.223257   46126 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:33.744567   46126 pod_ready.go:92] pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:33.744592   46126 pod_ready.go:81] duration metric: took 521.313062ms waiting for pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:33.744601   46126 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:35.763867   46126 pod_ready.go:102] pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:32.773549   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:32.773643   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:32.785461   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:33.273911   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:33.273994   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:33.285646   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:33.773944   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:33.774046   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:33.786576   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:34.273902   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:34.273969   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:34.285791   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:34.773895   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:34.773965   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:34.785934   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:35.273675   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:35.273738   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:35.285549   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:35.773954   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:35.774041   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:35.786010   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:36.273591   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:36.273659   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:36.284794   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:36.773864   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:36.773931   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:36.786610   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:37.273899   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:37.274025   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:37.285678   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:32.983159   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:34.985149   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:37.482210   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:35.413821   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:37.417790   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:37.768358   46126 pod_ready.go:92] pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:37.768398   46126 pod_ready.go:81] duration metric: took 4.023788643s waiting for pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.768411   46126 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.775805   46126 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:37.775835   46126 pod_ready.go:81] duration metric: took 7.41435ms waiting for pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.775847   46126 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.788110   46126 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:37.788139   46126 pod_ready.go:81] duration metric: took 12.28235ms waiting for pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.788151   46126 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2sfbm" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:38.018402   46126 pod_ready.go:92] pod "kube-proxy-2sfbm" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:38.018426   46126 pod_ready.go:81] duration metric: took 230.267334ms waiting for pod "kube-proxy-2sfbm" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:38.018443   46126 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:38.818531   46126 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:38.818559   46126 pod_ready.go:81] duration metric: took 800.108369ms waiting for pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:38.818572   46126 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:41.127953   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:37.748214   45269 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:44:37.748260   45269 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:44:37.748276   45269 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:44:37.748334   45269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:37.796781   45269 cri.go:89] found id: ""
	I1128 00:44:37.796866   45269 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:44:37.814832   45269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:44:37.824395   45269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:44:37.824469   45269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:37.833592   45269 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:37.833618   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:37.955071   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:38.939529   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:39.160852   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:39.243789   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
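The restartCluster path above replays individual "kubeadm init" phases in a fixed order: certs, kubeconfig, kubelet-start, control-plane, then etcd, all against /var/tmp/minikube/kubeadm.yaml. A minimal sketch of that ordering as a local loop follows; it is illustrative only (minikube actually runs these commands over SSH via ssh_runner), and the PATH value is the binaries directory seen in the log:

// phases.go: illustrative sketch of the phase ordering visible above; not
// minikube's code, and run locally rather than over SSH.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

const kubeadmCfg = "/var/tmp/minikube/kubeadm.yaml"

func main() {
	// Phase order as seen in the log: certs, kubeconfig, kubelet-start, control-plane, etcd.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		args := []string{"env", "PATH=/var/lib/minikube/binaries/v1.16.0:/usr/bin:/bin", "kubeadm", "init", "phase"}
		args = append(args, strings.Fields(p)...)
		args = append(args, "--config", kubeadmCfg)
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}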
	I1128 00:44:39.372434   45269 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:44:39.372525   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:39.405594   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:39.927024   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:40.426600   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:40.927163   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:40.966905   45269 api_server.go:72] duration metric: took 1.594470962s to wait for apiserver process to appear ...
	I1128 00:44:40.966937   45269 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:44:40.966959   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:40.967412   45269 api_server.go:269] stopped: https://192.168.39.172:8443/healthz: Get "https://192.168.39.172:8443/healthz": dial tcp 192.168.39.172:8443: connect: connection refused
	I1128 00:44:40.967457   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:40.967851   45269 api_server.go:269] stopped: https://192.168.39.172:8443/healthz: Get "https://192.168.39.172:8443/healthz": dial tcp 192.168.39.172:8443: connect: connection refused
	I1128 00:44:41.468536   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:39.483204   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:41.483578   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:39.914738   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:42.415305   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:43.130157   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:45.628970   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:46.468813   45269 api_server.go:269] stopped: https://192.168.39.172:8443/healthz: Get "https://192.168.39.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1128 00:44:46.468859   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:43.984318   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:46.483855   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:44.914911   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:47.415274   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:47.435553   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:47.435586   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:47.435601   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:47.480977   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:47.481002   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:47.481012   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:47.506064   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:47.506098   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:47.968355   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:47.974731   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1128 00:44:47.974766   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1128 00:44:48.468954   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:48.484597   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1128 00:44:48.484627   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1128 00:44:48.968810   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:48.979310   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I1128 00:44:48.987751   45269 api_server.go:141] control plane version: v1.16.0
	I1128 00:44:48.987782   45269 api_server.go:131] duration metric: took 8.020836981s to wait for apiserver health ...
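The readiness wait above polls https://192.168.39.172:8443/healthz until it returns 200 "ok", treating the intermediate 403 (anonymous user forbidden) and 500 (post-start hooks still failing) responses as reasons to keep waiting. A minimal sketch of such a polling loop in Go follows; it is not minikube's api_server.go, and it skips TLS verification purely to keep the example short (the real client trusts the cluster CA):

// healthzwait.go: minimal sketch of polling an apiserver /healthz endpoint
// until it returns 200. Illustrative only; TLS verification is disabled here
// solely for brevity.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// 403 (anonymous user) or 500 (post-start hooks not done) mean: keep waiting.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.172:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}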
	I1128 00:44:48.987793   45269 cni.go:84] Creating CNI manager for ""
	I1128 00:44:48.987801   45269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:48.989720   45269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:44:48.129394   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:50.130239   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:48.991320   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:44:49.001231   45269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:44:49.019895   45269 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:44:49.027389   45269 system_pods.go:59] 7 kube-system pods found
	I1128 00:44:49.027417   45269 system_pods.go:61] "coredns-5644d7b6d9-9sh7z" [dcc226fb-5fd9-4757-bd93-1113f185cdce] Running
	I1128 00:44:49.027422   45269 system_pods.go:61] "etcd-old-k8s-version-732472" [a5899a5a-4812-41e1-9251-78fdaeea9597] Running
	I1128 00:44:49.027428   45269 system_pods.go:61] "kube-apiserver-old-k8s-version-732472" [13d2df8c-84a3-4bd4-8eab-ed9f732a3839] Running
	I1128 00:44:49.027435   45269 system_pods.go:61] "kube-controller-manager-old-k8s-version-732472" [6dc1e479-1a3a-4b9e-acd6-1183a25aece4] Running
	I1128 00:44:49.027441   45269 system_pods.go:61] "kube-proxy-jqrks" [e8fd665a-099e-4941-a8f2-917d2b864eeb] Running
	I1128 00:44:49.027447   45269 system_pods.go:61] "kube-scheduler-old-k8s-version-732472" [de147a31-927e-4051-b6ae-05ddf59290c8] Running
	I1128 00:44:49.027457   45269 system_pods.go:61] "storage-provisioner" [8d7e725e-6c26-4435-8605-88c7d924f5ca] Running
	I1128 00:44:49.027469   45269 system_pods.go:74] duration metric: took 7.544096ms to wait for pod list to return data ...
	I1128 00:44:49.027479   45269 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:44:49.032133   45269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:44:49.032170   45269 node_conditions.go:123] node cpu capacity is 2
	I1128 00:44:49.032183   45269 node_conditions.go:105] duration metric: took 4.695493ms to run NodePressure ...
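The NodePressure step reads node capacity (ephemeral-storage, cpu) and verifies that no pressure conditions are set. A hedged client-go sketch of the same checks follows; the kubeconfig path is a placeholder and the code is illustrative rather than minikube's node_conditions.go:

// nodepressure.go: hedged sketch that reads the node fields reported above
// (ephemeral-storage and cpu capacity) and flags any pressure condition that
// is True. Kubeconfig path is a placeholder.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure should be False on a healthy node.
			if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure condition %s is True\n", c.Type)
			}
		}
	}
}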
	I1128 00:44:49.032203   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:49.293443   45269 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:44:49.297880   45269 retry.go:31] will retry after 216.894607ms: kubelet not initialised
	I1128 00:44:49.528912   45269 retry.go:31] will retry after 354.406288ms: kubelet not initialised
	I1128 00:44:49.897328   45269 retry.go:31] will retry after 462.959721ms: kubelet not initialised
	I1128 00:44:50.368260   45269 retry.go:31] will retry after 930.99638ms: kubelet not initialised
	I1128 00:44:51.303993   45269 retry.go:31] will retry after 1.275477572s: kubelet not initialised
	I1128 00:44:48.984387   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:51.482900   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:49.916072   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:52.415253   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:52.626182   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:54.626822   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:56.627881   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:52.584797   45269 retry.go:31] will retry after 2.542158001s: kubelet not initialised
	I1128 00:44:55.132600   45269 retry.go:31] will retry after 1.850404606s: kubelet not initialised
	I1128 00:44:56.987924   45269 retry.go:31] will retry after 2.371310185s: kubelet not initialised
	I1128 00:44:53.483557   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:55.982236   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:54.916135   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:57.415818   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:59.127409   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:01.629561   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:59.366141   45269 retry.go:31] will retry after 8.068803464s: kubelet not initialised
	I1128 00:44:57.983189   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:00.482336   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:02.483708   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:59.915991   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:02.414672   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:04.127296   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:06.127766   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:04.484008   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:06.983257   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:04.415147   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:06.914282   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:08.128322   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:10.627792   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:07.439538   45269 retry.go:31] will retry after 10.31431504s: kubelet not initialised
	I1128 00:45:08.985186   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:11.481933   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:08.914385   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:11.414899   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:12.628874   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:14.629312   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:17.126592   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:13.487653   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:15.983710   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:13.915497   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:15.915686   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:18.416396   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:19.127337   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:21.128352   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:17.759682   45269 retry.go:31] will retry after 12.137072248s: kubelet not initialised
	I1128 00:45:18.482187   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:20.982360   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:20.915228   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:22.918669   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:23.630252   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:26.128326   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:22.982597   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:24.983348   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:26.985418   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:25.415620   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:27.914150   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:28.626533   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:30.633655   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:29.902379   45269 kubeadm.go:787] kubelet initialised
	I1128 00:45:29.902403   45269 kubeadm.go:788] duration metric: took 40.608931816s waiting for restarted kubelet to initialise ...
	I1128 00:45:29.902410   45269 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:45:29.908442   45269 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9sh7z" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.914018   45269 pod_ready.go:92] pod "coredns-5644d7b6d9-9sh7z" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:29.914055   45269 pod_ready.go:81] duration metric: took 5.584146ms waiting for pod "coredns-5644d7b6d9-9sh7z" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.914069   45269 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-v8z7h" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.918699   45269 pod_ready.go:92] pod "coredns-5644d7b6d9-v8z7h" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:29.918720   45269 pod_ready.go:81] duration metric: took 4.644035ms waiting for pod "coredns-5644d7b6d9-v8z7h" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.918729   45269 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.922818   45269 pod_ready.go:92] pod "etcd-old-k8s-version-732472" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:29.922837   45269 pod_ready.go:81] duration metric: took 4.102217ms waiting for pod "etcd-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.922846   45269 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.927182   45269 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-732472" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:29.927208   45269 pod_ready.go:81] duration metric: took 4.354519ms waiting for pod "kube-apiserver-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.927220   45269 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:30.301553   45269 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-732472" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:30.301583   45269 pod_ready.go:81] duration metric: took 374.352863ms waiting for pod "kube-controller-manager-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:30.301611   45269 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jqrks" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:30.700858   45269 pod_ready.go:92] pod "kube-proxy-jqrks" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:30.700879   45269 pod_ready.go:81] duration metric: took 399.260896ms waiting for pod "kube-proxy-jqrks" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:30.700890   45269 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:31.103319   45269 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-732472" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:31.103340   45269 pod_ready.go:81] duration metric: took 402.442769ms waiting for pod "kube-scheduler-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:31.103349   45269 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace to be "Ready" ...
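From this point on, all four test processes (45269, 45580, 45815, 46126) sit in the same loop: poll a metrics-server pod until its PodReady condition turns True, which is what the repeated pod_ready.go:102 lines record. A hedged client-go sketch of that readiness check follows; the pod name is taken from the log, the kubeconfig path is a placeholder, and this is not minikube's pod_ready.go:

// podready.go: hedged sketch of the check behind the "has status Ready:False"
// lines above: fetch a pod and inspect its PodReady condition.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(cs, "kube-system", "metrics-server-74d5856cc6-vfkpf")
	fmt.Println(ready, err)
}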
	I1128 00:45:29.482088   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:31.483235   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:29.915117   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:32.416142   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:33.127196   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:35.127500   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:37.128846   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:33.422466   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:35.908596   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:33.983360   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:35.983776   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:34.417575   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:36.915005   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:39.627473   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:42.126292   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:37.908783   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:39.909842   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:41.910185   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:38.481697   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:40.481935   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:42.483458   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:39.415244   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:41.915086   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:44.127088   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:46.128254   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:44.409802   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:46.415828   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:44.986515   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:47.483162   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:44.414253   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:46.416386   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:48.628705   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:51.130754   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:48.908171   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:50.910974   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:49.985617   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:52.483720   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:48.915063   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:50.915382   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:53.414813   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:53.627668   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:55.629312   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:53.409415   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:55.420993   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:54.983055   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:56.983251   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:55.919627   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:58.415481   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:58.129666   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:00.629368   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:57.910151   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:00.408805   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:59.485375   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:01.983754   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:00.915086   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:03.413478   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:03.129933   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:05.627697   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:02.410888   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:04.910323   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:04.482593   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:06.981922   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:05.414437   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:07.415659   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:07.628741   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:10.126717   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:12.127246   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:07.408374   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:09.411381   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:11.416658   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:08.982790   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:10.984134   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:09.914828   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:11.915812   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:14.135673   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:16.626139   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:13.909480   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:16.409873   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:13.481792   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:15.482823   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:14.416315   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:16.914123   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:18.628828   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:21.131592   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:18.411060   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:20.910071   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:17.983098   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:20.482047   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:22.483266   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:19.413826   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:21.415442   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:23.626664   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:25.626823   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:23.424355   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:25.908255   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:24.984606   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:27.482265   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:23.915227   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:26.417059   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:27.628773   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:30.126818   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:27.911487   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:30.409652   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:29.485507   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:31.983913   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:28.916438   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:31.415565   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:32.626887   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:34.628401   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:37.128691   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:32.910776   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:35.421469   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:34.482605   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:36.982844   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:33.913533   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:35.914337   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:37.914708   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:39.627072   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:41.627591   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:37.908233   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:39.910199   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:38.983620   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:41.482862   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:39.914965   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:41.915003   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:43.628492   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:46.127393   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:42.408895   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:44.409264   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:46.909077   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:43.483111   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:45.483236   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:43.916039   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:46.415407   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:48.627253   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:51.127503   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:49.418512   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:51.427899   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:47.982977   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:49.983264   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:52.483168   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:48.914124   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:50.915620   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:52.919567   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:53.627296   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:55.627334   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:53.908531   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:56.408610   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:54.983084   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:57.481889   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:55.414154   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:57.416518   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:58.126605   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:00.127372   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:02.127896   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:58.410152   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:00.910206   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:59.482177   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:01.982997   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:59.915381   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:01.915574   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:04.626760   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:06.628849   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:03.417243   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:05.417887   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:03.983490   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:05.984161   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:04.414677   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:06.420179   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:09.127843   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:11.626987   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:07.908838   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:10.408385   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:08.482404   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:10.484146   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:08.914093   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:10.922145   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:13.417231   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:13.627586   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:15.628294   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:12.410728   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:14.910177   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:16.910469   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:12.982123   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:14.984037   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:17.483771   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:15.915323   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:18.415070   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:18.129617   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:20.628266   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:19.423065   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:21.908978   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:19.983122   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:22.482857   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:20.415232   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:22.915218   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:23.129285   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:25.627839   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:23.910794   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:26.409956   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:24.985146   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:27.482512   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:24.916041   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:27.415836   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:27.627978   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:30.127213   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:32.127569   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:28.413035   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:30.909092   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:29.483528   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:31.983745   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:29.913604   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:31.914567   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:34.129952   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:36.626951   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:33.414345   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:35.414559   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:34.481916   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:36.482024   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:34.413520   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:36.414517   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:38.416081   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:38.627773   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:41.126690   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:37.414665   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:39.908876   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:38.482323   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:40.983125   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:40.914615   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:43.415528   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:43.128692   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:45.627228   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:42.412788   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:44.909732   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:46.910133   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:43.482424   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:45.482507   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:47.482562   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:45.416841   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:47.914229   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:48.127074   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:50.627355   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:49.411030   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:51.420657   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:49.483765   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:51.982325   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:50.414235   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:52.414715   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:52.627557   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:54.628111   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:57.129482   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:53.910232   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:56.409320   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:53.795074   45580 pod_ready.go:81] duration metric: took 4m0.000752019s waiting for pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace to be "Ready" ...
	E1128 00:47:53.795108   45580 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:47:53.795124   45580 pod_ready.go:38] duration metric: took 4m9.844437599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:47:53.795148   45580 kubeadm.go:640] restartCluster took 4m29.759592783s
	W1128 00:47:53.795209   45580 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 00:47:53.795237   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
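The repeated pod_ready.go:102 lines above come from a readiness poll: each profile's goroutine re-checks its metrics-server pod every couple of seconds until the pod reports Ready or the 4m0s deadline expires, which is the "context deadline exceeded" outcome recorded at pod_ready.go:81/66 just above. The following is a minimal client-go sketch of such a loop, for illustration only; it is not minikube's actual pod_ready.go code, and the kubeconfig path, polling cadence, and error handling are assumptions (the namespace and pod name are copied from the log).

// readiness_poll.go - illustrative sketch of polling a pod's Ready condition
// with a hard deadline, similar in spirit to the pod_ready.go loop in the log.
// NOT minikube's implementation; kubeconfig path and cadence are assumed.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the named pod has condition Ready=True.
func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Re-check every 2s, give up after 4m0s - mirroring the cadence and the
	// deadline visible in the log above.
	err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		ready, err := podIsReady(context.TODO(), cs, "kube-system", "metrics-server-57f55c9bc5-sx4m7")
		if err != nil {
			// Treat transient API errors as "not ready yet" rather than fatal.
			return false, nil
		}
		return ready, nil
	})
	fmt.Println("wait result:", err) // nil if Ready, a timeout error otherwise
}

When the deadline is hit, the caller above logs the warning and falls back to "kubeadm reset" followed by a fresh "kubeadm init", which is what the subsequent 45580 log lines show.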
	I1128 00:47:54.416610   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:56.915781   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:59.129569   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:01.627046   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:58.409599   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:00.409906   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:58.916155   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:01.416966   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:03.627676   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:06.126607   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:02.410451   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:04.411074   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:06.912243   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:07.609428   45580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.814163406s)
	I1128 00:48:07.609508   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:07.624300   45580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:48:07.634606   45580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:48:07.644733   45580 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:48:07.644802   45580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 00:48:03.915780   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:06.416602   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:08.128657   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:10.629487   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:09.411193   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:11.908147   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:07.867577   45580 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 00:48:08.915404   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:11.416668   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:13.129233   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:15.630498   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:13.909762   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:16.409160   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:13.916628   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:15.916715   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:17.917022   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:19.126081   45580 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1128 00:48:19.126157   45580 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 00:48:19.126245   45580 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 00:48:19.126356   45580 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 00:48:19.126476   45580 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 00:48:19.126544   45580 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 00:48:19.128354   45580 out.go:204]   - Generating certificates and keys ...
	I1128 00:48:19.128461   45580 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 00:48:19.128546   45580 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 00:48:19.128664   45580 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 00:48:19.128807   45580 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 00:48:19.128927   45580 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 00:48:19.129001   45580 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 00:48:19.129100   45580 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 00:48:19.129175   45580 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 00:48:19.129275   45580 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 00:48:19.129387   45580 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 00:48:19.129432   45580 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 00:48:19.129501   45580 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 00:48:19.129559   45580 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 00:48:19.129616   45580 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 00:48:19.129696   45580 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 00:48:19.129760   45580 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 00:48:19.129853   45580 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 00:48:19.129921   45580 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 00:48:19.131350   45580 out.go:204]   - Booting up control plane ...
	I1128 00:48:19.131462   45580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 00:48:19.131578   45580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 00:48:19.131674   45580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 00:48:19.131798   45580 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 00:48:19.131914   45580 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 00:48:19.131972   45580 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 00:48:19.132149   45580 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 00:48:19.132245   45580 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502916 seconds
	I1128 00:48:19.132388   45580 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 00:48:19.132540   45580 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 00:48:19.132619   45580 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 00:48:19.132850   45580 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-304541 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 00:48:19.132959   45580 kubeadm.go:322] [bootstrap-token] Using token: tbyyd7.r005gkl9z2ll5pno
	I1128 00:48:19.134488   45580 out.go:204]   - Configuring RBAC rules ...
	I1128 00:48:19.134603   45580 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 00:48:19.134691   45580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 00:48:19.134841   45580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 00:48:19.135030   45580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 00:48:19.135200   45580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 00:48:19.135311   45580 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 00:48:19.135453   45580 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 00:48:19.135532   45580 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 00:48:19.135600   45580 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 00:48:19.135611   45580 kubeadm.go:322] 
	I1128 00:48:19.135692   45580 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 00:48:19.135700   45580 kubeadm.go:322] 
	I1128 00:48:19.135798   45580 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 00:48:19.135807   45580 kubeadm.go:322] 
	I1128 00:48:19.135840   45580 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 00:48:19.135916   45580 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 00:48:19.135987   45580 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 00:48:19.135996   45580 kubeadm.go:322] 
	I1128 00:48:19.136074   45580 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 00:48:19.136084   45580 kubeadm.go:322] 
	I1128 00:48:19.136153   45580 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 00:48:19.136161   45580 kubeadm.go:322] 
	I1128 00:48:19.136231   45580 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 00:48:19.136329   45580 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 00:48:19.136439   45580 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 00:48:19.136448   45580 kubeadm.go:322] 
	I1128 00:48:19.136552   45580 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 00:48:19.136662   45580 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 00:48:19.136674   45580 kubeadm.go:322] 
	I1128 00:48:19.136766   45580 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token tbyyd7.r005gkl9z2ll5pno \
	I1128 00:48:19.136878   45580 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1128 00:48:19.136907   45580 kubeadm.go:322] 	--control-plane 
	I1128 00:48:19.136913   45580 kubeadm.go:322] 
	I1128 00:48:19.136986   45580 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 00:48:19.136998   45580 kubeadm.go:322] 
	I1128 00:48:19.137097   45580 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tbyyd7.r005gkl9z2ll5pno \
	I1128 00:48:19.137259   45580 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1128 00:48:19.137282   45580 cni.go:84] Creating CNI manager for ""
	I1128 00:48:19.137290   45580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:48:19.138890   45580 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:48:18.126502   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:20.131785   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:18.410659   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:20.910338   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:19.140172   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:48:19.160540   45580 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:48:19.224333   45580 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:48:19.224409   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:19.224455   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=embed-certs-304541 minikube.k8s.io/updated_at=2023_11_28T00_48_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:19.301346   45580 ops.go:34] apiserver oom_adj: -16
	I1128 00:48:19.544274   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:19.656215   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:20.257645   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:20.757476   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:21.257246   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:21.757278   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:22.256655   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:22.757282   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:20.415051   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:22.914901   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:22.627184   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:24.627388   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:27.127311   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:23.409417   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:25.909086   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:23.257594   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:23.757135   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:24.257396   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:24.757508   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:25.257426   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:25.756605   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:26.256768   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:26.756656   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:27.256783   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:27.756856   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:25.414964   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:27.415763   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:28.257005   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:28.756875   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:29.256833   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:29.757261   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:30.257313   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:30.756918   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:31.257535   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:31.757356   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:31.917284   45580 kubeadm.go:1081] duration metric: took 12.692941702s to wait for elevateKubeSystemPrivileges.
	I1128 00:48:31.917326   45580 kubeadm.go:406] StartCluster complete in 5m7.933075195s
	I1128 00:48:31.917353   45580 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:48:31.917430   45580 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:48:31.919940   45580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:48:31.920855   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:48:31.921063   45580 config.go:182] Loaded profile config "embed-certs-304541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:48:31.921037   45580 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:48:31.921110   45580 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-304541"
	I1128 00:48:31.921123   45580 addons.go:69] Setting default-storageclass=true in profile "embed-certs-304541"
	I1128 00:48:31.921143   45580 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-304541"
	I1128 00:48:31.921148   45580 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-304541"
	W1128 00:48:31.921152   45580 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:48:31.921116   45580 addons.go:69] Setting metrics-server=true in profile "embed-certs-304541"
	I1128 00:48:31.921213   45580 host.go:66] Checking if "embed-certs-304541" exists ...
	I1128 00:48:31.921220   45580 addons.go:231] Setting addon metrics-server=true in "embed-certs-304541"
	W1128 00:48:31.921229   45580 addons.go:240] addon metrics-server should already be in state true
	I1128 00:48:31.921265   45580 host.go:66] Checking if "embed-certs-304541" exists ...
	I1128 00:48:31.921531   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.921545   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.921578   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.921584   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.921594   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.921605   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.941345   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39959
	I1128 00:48:31.941374   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33283
	I1128 00:48:31.941359   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I1128 00:48:31.942009   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.942040   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.942449   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.942460   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.942477   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.942488   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.942549   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.942937   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.942955   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.943129   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.943134   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.943300   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.943646   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.943671   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.943774   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:48:31.944439   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.944470   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.947579   45580 addons.go:231] Setting addon default-storageclass=true in "embed-certs-304541"
	W1128 00:48:31.947605   45580 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:48:31.947635   45580 host.go:66] Checking if "embed-certs-304541" exists ...
	I1128 00:48:31.948083   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.948114   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.964906   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39541
	I1128 00:48:31.964942   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38435
	I1128 00:48:31.966157   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.966261   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.966778   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.966795   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.966980   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.966999   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.967444   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.967481   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37679
	I1128 00:48:31.967447   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.967636   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:48:31.968331   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.968434   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:48:31.968812   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.968830   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.969729   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:48:31.972519   45580 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:48:31.970271   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:48:31.972982   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.974461   45580 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:48:31.974479   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:48:31.974498   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:48:31.976187   45580 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 00:48:31.974991   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.977660   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:31.977907   45580 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 00:48:31.977925   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 00:48:31.977943   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:48:31.978001   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.978243   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:48:31.978264   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:31.978506   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:48:31.978727   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:48:31.978954   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:48:31.979170   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:48:31.980878   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:31.981226   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:48:31.981262   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:31.981399   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:48:31.981571   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:48:31.981690   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:48:31.981810   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
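The sshutil.go entries above record the connection details minikube uses to push the addon manifests onto the embed-certs-304541 VM. As a rough sketch (IP, port, key path and user are taken verbatim from the log and are specific to this run), the same session could be opened by hand:

    # connect with the per-machine key logged by sshutil.go
    ssh -o StrictHostKeyChecking=no -p 22 \
        -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa \
        docker@192.168.50.93
    # or let minikube resolve the same details itself
    minikube ssh -p embed-certs-304541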
	I1128 00:48:31.997812   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43311
	I1128 00:48:31.998404   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.998989   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.999016   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.999427   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.999652   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:48:32.001212   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:48:32.001482   45580 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:48:32.001496   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:48:32.001513   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:48:32.002981   45580 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-304541" context rescaled to 1 replicas
	I1128 00:48:32.003019   45580 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.93 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:48:32.005961   45580 out.go:177] * Verifying Kubernetes components...
	I1128 00:48:29.127403   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:31.127830   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:27.911587   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:30.411923   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:32.004640   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:32.005211   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:48:32.007586   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:32.007585   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:48:32.007700   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:32.007722   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:48:32.007894   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:48:32.008049   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:48:32.213297   45580 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 00:48:32.213322   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 00:48:32.255646   45580 node_ready.go:35] waiting up to 6m0s for node "embed-certs-304541" to be "Ready" ...
	I1128 00:48:32.255743   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 00:48:32.268542   45580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:48:32.270044   45580 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 00:48:32.270066   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 00:48:32.304458   45580 node_ready.go:49] node "embed-certs-304541" has status "Ready":"True"
	I1128 00:48:32.304486   45580 node_ready.go:38] duration metric: took 48.802082ms waiting for node "embed-certs-304541" to be "Ready" ...
	I1128 00:48:32.304498   45580 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:48:32.320550   45580 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6n54l" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:32.437814   45580 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:48:32.437852   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 00:48:32.462274   45580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:48:32.541622   45580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:48:29.418692   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:30.455152   45815 pod_ready.go:81] duration metric: took 4m0.000108261s waiting for pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace to be "Ready" ...
	E1128 00:48:30.455199   45815 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:48:30.455216   45815 pod_ready.go:38] duration metric: took 4m12.906382743s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:48:30.455251   45815 kubeadm.go:640] restartCluster took 4m33.513232005s
	W1128 00:48:30.455312   45815 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 00:48:30.455356   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 00:48:34.327113   45580 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.071322786s)
	I1128 00:48:34.327155   45580 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
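The two-second ssh_runner call that just completed is the CoreDNS host-record injection: minikube edits the in-cluster Corefile so that host.minikube.internal resolves to the host-side gateway (192.168.50.1 on this network). A condensed sketch of the same edit, based on the command recorded in the log (sed anchor simplified, sudo and the pinned kubectl binary path dropped):

    # add a hosts{} block for host.minikube.internal ahead of the forward plugin, then replace the ConfigMap
    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml \
      | sed -e '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -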
	I1128 00:48:34.342711   45580 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.074127133s)
	I1128 00:48:34.342776   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.342791   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.343188   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.343284   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.343328   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.343339   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.343348   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.343581   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.343598   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.366719   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.366754   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.367052   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.367104   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.367119   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.467705   45580 pod_ready.go:102] pod "coredns-5dd5756b68-6n54l" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:34.935662   45580 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.473338078s)
	I1128 00:48:34.935745   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.935814   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.936143   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.936184   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.936193   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.936203   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.936211   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.936435   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.936482   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.977248   45580 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.435573064s)
	I1128 00:48:34.977318   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.977345   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.977738   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.977785   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.977806   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.977824   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.977837   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.979823   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.979823   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.979849   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.979860   45580 addons.go:467] Verifying addon metrics-server=true in "embed-certs-304541"
	I1128 00:48:34.981768   45580 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 00:48:33.129597   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:35.129880   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:32.912875   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:35.411225   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:34.983440   45580 addons.go:502] enable addons completed in 3.062399778s: enabled=[default-storageclass storage-provisioner metrics-server]
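At this point the default-storageclass, storage-provisioner and metrics-server addons have all been applied and verified for this profile. A quick manual check of the same state (a sketch; the profile, deployment and pod names are taken from this run's log) would be:

    # addon status for this profile
    minikube addons list -p embed-certs-304541
    # the addon workloads in kube-system
    kubectl --context embed-certs-304541 -n kube-system get deploy metrics-server
    kubectl --context embed-certs-304541 -n kube-system get pod storage-provisioner
    kubectl --context embed-certs-304541 get storageclass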
	I1128 00:48:36.495977   45580 pod_ready.go:92] pod "coredns-5dd5756b68-6n54l" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.496002   45580 pod_ready.go:81] duration metric: took 4.175421265s waiting for pod "coredns-5dd5756b68-6n54l" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.496012   45580 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kjg5f" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.508269   45580 pod_ready.go:92] pod "coredns-5dd5756b68-kjg5f" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.508293   45580 pod_ready.go:81] duration metric: took 12.274473ms waiting for pod "coredns-5dd5756b68-kjg5f" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.508302   45580 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.515826   45580 pod_ready.go:92] pod "etcd-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.515855   45580 pod_ready.go:81] duration metric: took 7.545794ms waiting for pod "etcd-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.515873   45580 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.523206   45580 pod_ready.go:92] pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.523271   45580 pod_ready.go:81] duration metric: took 7.388614ms waiting for pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.523286   45580 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.529859   45580 pod_ready.go:92] pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.529881   45580 pod_ready.go:81] duration metric: took 6.58575ms waiting for pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.529889   45580 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w5ct2" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.857435   45580 pod_ready.go:92] pod "kube-proxy-w5ct2" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.857467   45580 pod_ready.go:81] duration metric: took 327.570428ms waiting for pod "kube-proxy-w5ct2" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.857481   45580 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:37.257433   45580 pod_ready.go:92] pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:37.257455   45580 pod_ready.go:81] duration metric: took 399.966903ms waiting for pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:37.257462   45580 pod_ready.go:38] duration metric: took 4.952954771s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:48:37.257476   45580 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:48:37.257523   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:48:37.275627   45580 api_server.go:72] duration metric: took 5.272574466s to wait for apiserver process to appear ...
	I1128 00:48:37.275656   45580 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:48:37.275673   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:48:37.283884   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 200:
	ok
	I1128 00:48:37.285716   45580 api_server.go:141] control plane version: v1.28.4
	I1128 00:48:37.285744   45580 api_server.go:131] duration metric: took 10.080776ms to wait for apiserver health ...
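The healthz probe above is a plain HTTPS GET against the apiserver. The same check can be reproduced with curl (a sketch: on a stock kubeadm apiserver /healthz is usually readable without credentials via the system:public-info-viewer binding, and -k skips the self-signed cluster CA):

    curl -sk https://192.168.50.93:8443/healthz        # expected body: ok
    # or trust the cluster CA instead of -k (certificateDir as logged by kubeadm further below)
    curl -s --cacert /var/lib/minikube/certs/ca.crt https://192.168.50.93:8443/healthz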
	I1128 00:48:37.285766   45580 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:48:37.460530   45580 system_pods.go:59] 9 kube-system pods found
	I1128 00:48:37.460555   45580 system_pods.go:61] "coredns-5dd5756b68-6n54l" [bb59175d-e2d9-4c98-9940-b705fa76512f] Running
	I1128 00:48:37.460560   45580 system_pods.go:61] "coredns-5dd5756b68-kjg5f" [bf956dfb-3a7f-4605-a849-ee887562fce5] Running
	I1128 00:48:37.460563   45580 system_pods.go:61] "etcd-embed-certs-304541" [7726ea36-d2a2-4ba8-ad20-e892b0c0059c] Running
	I1128 00:48:37.460568   45580 system_pods.go:61] "kube-apiserver-embed-certs-304541" [340e8023-afd3-4105-b513-3f232dfbd370] Running
	I1128 00:48:37.460572   45580 system_pods.go:61] "kube-controller-manager-embed-certs-304541" [ddba15be-e7c2-4cea-9256-1d7e6ea7b017] Running
	I1128 00:48:37.460575   45580 system_pods.go:61] "kube-proxy-w5ct2" [b3ac66db-fe8d-419d-9237-b0dd4077559a] Running
	I1128 00:48:37.460579   45580 system_pods.go:61] "kube-scheduler-embed-certs-304541" [30830958-963d-4571-8e47-acc169506ead] Running
	I1128 00:48:37.460585   45580 system_pods.go:61] "metrics-server-57f55c9bc5-xzz2t" [926e9a40-f0fe-47ea-8e44-6816132ec0c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:48:37.460589   45580 system_pods.go:61] "storage-provisioner" [c62a8419-b0e5-4330-a49b-986693e183b2] Running
	I1128 00:48:37.460597   45580 system_pods.go:74] duration metric: took 174.824783ms to wait for pod list to return data ...
	I1128 00:48:37.460619   45580 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:48:37.656404   45580 default_sa.go:45] found service account: "default"
	I1128 00:48:37.656431   45580 default_sa.go:55] duration metric: took 195.805836ms for default service account to be created ...
	I1128 00:48:37.656444   45580 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:48:37.861049   45580 system_pods.go:86] 9 kube-system pods found
	I1128 00:48:37.861086   45580 system_pods.go:89] "coredns-5dd5756b68-6n54l" [bb59175d-e2d9-4c98-9940-b705fa76512f] Running
	I1128 00:48:37.861095   45580 system_pods.go:89] "coredns-5dd5756b68-kjg5f" [bf956dfb-3a7f-4605-a849-ee887562fce5] Running
	I1128 00:48:37.861101   45580 system_pods.go:89] "etcd-embed-certs-304541" [7726ea36-d2a2-4ba8-ad20-e892b0c0059c] Running
	I1128 00:48:37.861108   45580 system_pods.go:89] "kube-apiserver-embed-certs-304541" [340e8023-afd3-4105-b513-3f232dfbd370] Running
	I1128 00:48:37.861116   45580 system_pods.go:89] "kube-controller-manager-embed-certs-304541" [ddba15be-e7c2-4cea-9256-1d7e6ea7b017] Running
	I1128 00:48:37.861122   45580 system_pods.go:89] "kube-proxy-w5ct2" [b3ac66db-fe8d-419d-9237-b0dd4077559a] Running
	I1128 00:48:37.861128   45580 system_pods.go:89] "kube-scheduler-embed-certs-304541" [30830958-963d-4571-8e47-acc169506ead] Running
	I1128 00:48:37.861140   45580 system_pods.go:89] "metrics-server-57f55c9bc5-xzz2t" [926e9a40-f0fe-47ea-8e44-6816132ec0c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:48:37.861157   45580 system_pods.go:89] "storage-provisioner" [c62a8419-b0e5-4330-a49b-986693e183b2] Running
	I1128 00:48:37.861171   45580 system_pods.go:126] duration metric: took 204.720501ms to wait for k8s-apps to be running ...
	I1128 00:48:37.861187   45580 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:48:37.861241   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:37.875344   45580 system_svc.go:56] duration metric: took 14.150294ms WaitForService to wait for kubelet.
	I1128 00:48:37.875380   45580 kubeadm.go:581] duration metric: took 5.872335245s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:48:37.875407   45580 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:48:38.057075   45580 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:48:38.057106   45580 node_conditions.go:123] node cpu capacity is 2
	I1128 00:48:38.057117   45580 node_conditions.go:105] duration metric: took 181.705529ms to run NodePressure ...
	I1128 00:48:38.057127   45580 start.go:228] waiting for startup goroutines ...
	I1128 00:48:38.057133   45580 start.go:233] waiting for cluster config update ...
	I1128 00:48:38.057141   45580 start.go:242] writing updated cluster config ...
	I1128 00:48:38.057366   45580 ssh_runner.go:195] Run: rm -f paused
	I1128 00:48:38.107014   45580 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 00:48:38.109071   45580 out.go:177] * Done! kubectl is now configured to use "embed-certs-304541" cluster and "default" namespace by default
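With the profile started, kubectl now points at the embed-certs-304541 cluster. A short post-start check covering what the log just verified (node readiness, system pods, node capacity) could look like:

    kubectl --context embed-certs-304541 get nodes -o wide
    kubectl --context embed-certs-304541 -n kube-system get pods
    # the figures reported by node_conditions.go above: 2 CPUs, 17784752Ki ephemeral storage
    kubectl --context embed-certs-304541 get node embed-certs-304541 -o jsonpath='{.status.capacity}'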
	I1128 00:48:37.626062   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:38.819130   46126 pod_ready.go:81] duration metric: took 4m0.000531461s waiting for pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace to be "Ready" ...
	E1128 00:48:38.819159   46126 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:48:38.819168   46126 pod_ready.go:38] duration metric: took 4m5.602220781s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:48:38.819189   46126 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:48:38.819216   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 00:48:38.819269   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 00:48:38.882052   46126 cri.go:89] found id: "a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:38.882075   46126 cri.go:89] found id: ""
	I1128 00:48:38.882084   46126 logs.go:284] 1 containers: [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6]
	I1128 00:48:38.882143   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:38.886688   46126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 00:48:38.886751   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 00:48:38.926163   46126 cri.go:89] found id: "0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:38.926190   46126 cri.go:89] found id: ""
	I1128 00:48:38.926197   46126 logs.go:284] 1 containers: [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c]
	I1128 00:48:38.926259   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:38.930505   46126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 00:48:38.930558   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 00:48:38.979793   46126 cri.go:89] found id: "02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:38.979816   46126 cri.go:89] found id: ""
	I1128 00:48:38.979823   46126 logs.go:284] 1 containers: [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b]
	I1128 00:48:38.979876   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:38.984146   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 00:48:38.984244   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 00:48:39.033485   46126 cri.go:89] found id: "032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:39.033509   46126 cri.go:89] found id: ""
	I1128 00:48:39.033519   46126 logs.go:284] 1 containers: [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193]
	I1128 00:48:39.033575   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:39.038977   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 00:48:39.039038   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 00:48:39.079669   46126 cri.go:89] found id: "2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:39.079697   46126 cri.go:89] found id: ""
	I1128 00:48:39.079707   46126 logs.go:284] 1 containers: [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55]
	I1128 00:48:39.079767   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:39.084447   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 00:48:39.084515   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 00:48:39.121494   46126 cri.go:89] found id: "cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:39.121523   46126 cri.go:89] found id: ""
	I1128 00:48:39.121533   46126 logs.go:284] 1 containers: [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64]
	I1128 00:48:39.121594   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:39.126495   46126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 00:48:39.126554   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 00:48:39.168822   46126 cri.go:89] found id: ""
	I1128 00:48:39.168851   46126 logs.go:284] 0 containers: []
	W1128 00:48:39.168862   46126 logs.go:286] No container was found matching "kindnet"
	I1128 00:48:39.168869   46126 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 00:48:39.168924   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 00:48:39.213834   46126 cri.go:89] found id: "fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:39.213859   46126 cri.go:89] found id: ""
	I1128 00:48:39.213869   46126 logs.go:284] 1 containers: [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc]
	I1128 00:48:39.213914   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:39.218746   46126 logs.go:123] Gathering logs for dmesg ...
	I1128 00:48:39.218772   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 00:48:39.232098   46126 logs.go:123] Gathering logs for describe nodes ...
	I1128 00:48:39.232127   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 00:48:39.373641   46126 logs.go:123] Gathering logs for kube-apiserver [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6] ...
	I1128 00:48:39.373674   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:39.451311   46126 logs.go:123] Gathering logs for storage-provisioner [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc] ...
	I1128 00:48:39.451349   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:39.498219   46126 logs.go:123] Gathering logs for CRI-O ...
	I1128 00:48:39.498247   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 00:48:39.952276   46126 logs.go:123] Gathering logs for kubelet ...
	I1128 00:48:39.952314   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 00:48:40.008385   46126 logs.go:123] Gathering logs for coredns [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b] ...
	I1128 00:48:40.008425   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:40.052409   46126 logs.go:123] Gathering logs for kube-scheduler [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193] ...
	I1128 00:48:40.052443   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:40.092943   46126 logs.go:123] Gathering logs for kube-proxy [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55] ...
	I1128 00:48:40.092978   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:40.135490   46126 logs.go:123] Gathering logs for kube-controller-manager [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64] ...
	I1128 00:48:40.135520   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:40.189756   46126 logs.go:123] Gathering logs for container status ...
	I1128 00:48:40.189793   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 00:48:40.242615   46126 logs.go:123] Gathering logs for etcd [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c] ...
	I1128 00:48:40.242643   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
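The "Gathering logs" block above follows a fixed pattern on a CRI-O node: resolve each control-plane container ID with crictl, tail its logs, and pull the kubelet and CRI-O unit journals. Done by hand, using only the commands already visible in the log:

    # find the apiserver container and tail its last 400 log lines
    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)
    sudo crictl logs --tail 400 "$ID"
    # runtime and kubelet unit logs
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    # overall container status
    sudo crictl ps -a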
	I1128 00:48:37.415898   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:39.910954   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:42.802428   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:48:42.818606   46126 api_server.go:72] duration metric: took 4m14.508070703s to wait for apiserver process to appear ...
	I1128 00:48:42.818632   46126 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:48:42.818667   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 00:48:42.818721   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 00:48:42.872566   46126 cri.go:89] found id: "a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:42.872603   46126 cri.go:89] found id: ""
	I1128 00:48:42.872613   46126 logs.go:284] 1 containers: [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6]
	I1128 00:48:42.872675   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:42.878165   46126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 00:48:42.878232   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 00:48:42.924667   46126 cri.go:89] found id: "0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:42.924689   46126 cri.go:89] found id: ""
	I1128 00:48:42.924699   46126 logs.go:284] 1 containers: [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c]
	I1128 00:48:42.924772   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:42.929748   46126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 00:48:42.929809   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 00:48:42.977787   46126 cri.go:89] found id: "02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:42.977815   46126 cri.go:89] found id: ""
	I1128 00:48:42.977825   46126 logs.go:284] 1 containers: [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b]
	I1128 00:48:42.977887   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:42.982991   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 00:48:42.983071   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 00:48:43.032835   46126 cri.go:89] found id: "032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:43.032866   46126 cri.go:89] found id: ""
	I1128 00:48:43.032876   46126 logs.go:284] 1 containers: [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193]
	I1128 00:48:43.032933   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:43.038635   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 00:48:43.038711   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 00:48:43.084051   46126 cri.go:89] found id: "2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:43.084080   46126 cri.go:89] found id: ""
	I1128 00:48:43.084090   46126 logs.go:284] 1 containers: [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55]
	I1128 00:48:43.084161   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:43.088908   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 00:48:43.088976   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 00:48:43.130640   46126 cri.go:89] found id: "cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:43.130666   46126 cri.go:89] found id: ""
	I1128 00:48:43.130676   46126 logs.go:284] 1 containers: [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64]
	I1128 00:48:43.130738   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:43.135354   46126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 00:48:43.135434   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 00:48:43.179655   46126 cri.go:89] found id: ""
	I1128 00:48:43.179690   46126 logs.go:284] 0 containers: []
	W1128 00:48:43.179699   46126 logs.go:286] No container was found matching "kindnet"
	I1128 00:48:43.179705   46126 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 00:48:43.179770   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 00:48:43.228309   46126 cri.go:89] found id: "fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:43.228335   46126 cri.go:89] found id: ""
	I1128 00:48:43.228343   46126 logs.go:284] 1 containers: [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc]
	I1128 00:48:43.228404   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:43.233343   46126 logs.go:123] Gathering logs for dmesg ...
	I1128 00:48:43.233375   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 00:48:43.247396   46126 logs.go:123] Gathering logs for describe nodes ...
	I1128 00:48:43.247430   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 00:48:43.386131   46126 logs.go:123] Gathering logs for kube-apiserver [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6] ...
	I1128 00:48:43.386181   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:43.463228   46126 logs.go:123] Gathering logs for kube-proxy [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55] ...
	I1128 00:48:43.463275   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:43.519469   46126 logs.go:123] Gathering logs for kube-controller-manager [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64] ...
	I1128 00:48:43.519511   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:43.581402   46126 logs.go:123] Gathering logs for container status ...
	I1128 00:48:43.581437   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 00:48:43.641804   46126 logs.go:123] Gathering logs for kubelet ...
	I1128 00:48:43.641844   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 00:48:43.707768   46126 logs.go:123] Gathering logs for etcd [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c] ...
	I1128 00:48:43.707807   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:43.779636   46126 logs.go:123] Gathering logs for coredns [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b] ...
	I1128 00:48:43.779673   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:43.822939   46126 logs.go:123] Gathering logs for kube-scheduler [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193] ...
	I1128 00:48:43.822972   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:43.869304   46126 logs.go:123] Gathering logs for storage-provisioner [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc] ...
	I1128 00:48:43.869344   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:43.917500   46126 logs.go:123] Gathering logs for CRI-O ...
	I1128 00:48:43.917528   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 00:48:46.886551   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:48:46.892696   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 200:
	ok
	I1128 00:48:46.894400   46126 api_server.go:141] control plane version: v1.28.4
	I1128 00:48:46.894424   46126 api_server.go:131] duration metric: took 4.075784232s to wait for apiserver health ...
	I1128 00:48:46.894433   46126 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:48:46.894455   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 00:48:46.894492   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 00:48:46.939259   46126 cri.go:89] found id: "a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:46.939291   46126 cri.go:89] found id: ""
	I1128 00:48:46.939302   46126 logs.go:284] 1 containers: [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6]
	I1128 00:48:46.939364   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:46.946934   46126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 00:48:46.947012   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 00:48:46.989896   46126 cri.go:89] found id: "0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:46.989920   46126 cri.go:89] found id: ""
	I1128 00:48:46.989930   46126 logs.go:284] 1 containers: [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c]
	I1128 00:48:46.989988   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:46.994923   46126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 00:48:46.994994   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 00:48:47.040298   46126 cri.go:89] found id: "02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:47.040330   46126 cri.go:89] found id: ""
	I1128 00:48:47.040339   46126 logs.go:284] 1 containers: [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b]
	I1128 00:48:47.040396   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.045041   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 00:48:47.045113   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 00:48:47.093254   46126 cri.go:89] found id: "032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:47.093282   46126 cri.go:89] found id: ""
	I1128 00:48:47.093290   46126 logs.go:284] 1 containers: [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193]
	I1128 00:48:47.093345   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.097856   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 00:48:47.097916   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 00:48:47.150763   46126 cri.go:89] found id: "2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:47.150790   46126 cri.go:89] found id: ""
	I1128 00:48:47.150800   46126 logs.go:284] 1 containers: [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55]
	I1128 00:48:47.150855   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.155272   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 00:48:47.155348   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 00:48:47.203549   46126 cri.go:89] found id: "cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:47.203586   46126 cri.go:89] found id: ""
	I1128 00:48:47.203600   46126 logs.go:284] 1 containers: [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64]
	I1128 00:48:47.203670   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.209313   46126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 00:48:47.209384   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 00:48:42.410241   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:44.909607   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:46.893894   45815 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (16.438515297s)
	I1128 00:48:46.893965   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:46.909967   45815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:48:46.919457   45815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:48:46.928580   45815 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:48:46.928629   45815 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 00:48:46.989655   45815 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.0
	I1128 00:48:46.989772   45815 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 00:48:47.162717   45815 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 00:48:47.162868   45815 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 00:48:47.163002   45815 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 00:48:47.453392   45815 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 00:48:47.455125   45815 out.go:204]   - Generating certificates and keys ...
	I1128 00:48:47.455291   45815 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 00:48:47.455388   45815 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 00:48:47.455530   45815 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 00:48:47.455605   45815 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 00:48:47.456116   45815 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 00:48:47.456786   45815 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 00:48:47.457320   45815 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 00:48:47.457814   45815 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 00:48:47.458228   45815 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 00:48:47.458584   45815 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 00:48:47.458984   45815 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 00:48:47.459080   45815 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 00:48:47.654823   45815 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 00:48:47.858053   45815 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1128 00:48:48.006981   45815 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 00:48:48.256244   45815 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 00:48:48.381440   45815 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 00:48:48.381976   45815 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 00:48:48.384696   45815 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 00:48:48.386824   45815 out.go:204]   - Booting up control plane ...
	I1128 00:48:48.386943   45815 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 00:48:48.387057   45815 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 00:48:48.387155   45815 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 00:48:48.404036   45815 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 00:48:48.408139   45815 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 00:48:48.408584   45815 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 00:48:48.539731   45815 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
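Because the earlier restart of this profile timed out, the log above shows the fallback path: kubeadm reset tears the old control plane down (which is why the /etc/kubernetes/*.conf stale-config check then finds nothing), and kubeadm init rebuilds it from minikube's generated config. Condensed, the sequence recorded in the log is:

    # tear down the previous control plane (CRI-O socket as used in this run)
    sudo kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    # re-initialise from minikube's kubeadm config, ignoring the preflight checks listed in the log (list abbreviated here)
    sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem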
	I1128 00:48:47.259312   46126 cri.go:89] found id: ""
	I1128 00:48:47.259343   46126 logs.go:284] 0 containers: []
	W1128 00:48:47.259353   46126 logs.go:286] No container was found matching "kindnet"
	I1128 00:48:47.259361   46126 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 00:48:47.259421   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 00:48:47.308650   46126 cri.go:89] found id: "fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:47.308681   46126 cri.go:89] found id: ""
	I1128 00:48:47.308692   46126 logs.go:284] 1 containers: [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc]
	I1128 00:48:47.308764   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.313702   46126 logs.go:123] Gathering logs for dmesg ...
	I1128 00:48:47.313727   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 00:48:47.327753   46126 logs.go:123] Gathering logs for describe nodes ...
	I1128 00:48:47.327788   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 00:48:47.490493   46126 logs.go:123] Gathering logs for etcd [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c] ...
	I1128 00:48:47.490525   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:47.554064   46126 logs.go:123] Gathering logs for coredns [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b] ...
	I1128 00:48:47.554097   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:47.604401   46126 logs.go:123] Gathering logs for kube-proxy [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55] ...
	I1128 00:48:47.604433   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:47.643173   46126 logs.go:123] Gathering logs for kube-controller-manager [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64] ...
	I1128 00:48:47.643211   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:47.707400   46126 logs.go:123] Gathering logs for storage-provisioner [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc] ...
	I1128 00:48:47.707432   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:47.763831   46126 logs.go:123] Gathering logs for container status ...
	I1128 00:48:47.763860   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 00:48:47.817244   46126 logs.go:123] Gathering logs for kubelet ...
	I1128 00:48:47.817278   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 00:48:47.872462   46126 logs.go:123] Gathering logs for kube-apiserver [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6] ...
	I1128 00:48:47.872499   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:47.930695   46126 logs.go:123] Gathering logs for kube-scheduler [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193] ...
	I1128 00:48:47.930729   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:47.987718   46126 logs.go:123] Gathering logs for CRI-O ...
	I1128 00:48:47.987748   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 00:48:50.856470   46126 system_pods.go:59] 8 kube-system pods found
	I1128 00:48:50.856510   46126 system_pods.go:61] "coredns-5dd5756b68-n7qpb" [d027f799-6ced-488e-a4f7-6df351193c64] Running
	I1128 00:48:50.856518   46126 system_pods.go:61] "etcd-default-k8s-diff-port-488423" [55bf80da-df13-4429-962c-7fdb5ab44ea8] Running
	I1128 00:48:50.856525   46126 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-488423" [88715645-e98e-42be-ad99-cc7711605abc] Running
	I1128 00:48:50.856533   46126 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-488423" [07935350-12e0-4e86-8f88-7e03890aa417] Running
	I1128 00:48:50.856539   46126 system_pods.go:61] "kube-proxy-2sfbm" [8d92ac1f-4070-4000-9bc6-3d277e0c8c6e] Running
	I1128 00:48:50.856545   46126 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-488423" [42baed98-6b29-4f33-8bb3-df082a1b36ce] Running
	I1128 00:48:50.856558   46126 system_pods.go:61] "metrics-server-57f55c9bc5-fk9xx" [8b0d0cd6-41c5-4b67-98f9-f046e959e0e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:48:50.856571   46126 system_pods.go:61] "storage-provisioner" [f1e6e7d1-86aa-403c-b753-2b94beb7d7b1] Running
	I1128 00:48:50.856579   46126 system_pods.go:74] duration metric: took 3.962140088s to wait for pod list to return data ...
	I1128 00:48:50.856589   46126 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:48:50.859308   46126 default_sa.go:45] found service account: "default"
	I1128 00:48:50.859338   46126 default_sa.go:55] duration metric: took 2.741136ms for default service account to be created ...
	I1128 00:48:50.859347   46126 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:48:50.865347   46126 system_pods.go:86] 8 kube-system pods found
	I1128 00:48:50.865371   46126 system_pods.go:89] "coredns-5dd5756b68-n7qpb" [d027f799-6ced-488e-a4f7-6df351193c64] Running
	I1128 00:48:50.865377   46126 system_pods.go:89] "etcd-default-k8s-diff-port-488423" [55bf80da-df13-4429-962c-7fdb5ab44ea8] Running
	I1128 00:48:50.865382   46126 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-488423" [88715645-e98e-42be-ad99-cc7711605abc] Running
	I1128 00:48:50.865387   46126 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-488423" [07935350-12e0-4e86-8f88-7e03890aa417] Running
	I1128 00:48:50.865391   46126 system_pods.go:89] "kube-proxy-2sfbm" [8d92ac1f-4070-4000-9bc6-3d277e0c8c6e] Running
	I1128 00:48:50.865395   46126 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-488423" [42baed98-6b29-4f33-8bb3-df082a1b36ce] Running
	I1128 00:48:50.865405   46126 system_pods.go:89] "metrics-server-57f55c9bc5-fk9xx" [8b0d0cd6-41c5-4b67-98f9-f046e959e0e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:48:50.865413   46126 system_pods.go:89] "storage-provisioner" [f1e6e7d1-86aa-403c-b753-2b94beb7d7b1] Running
	I1128 00:48:50.865425   46126 system_pods.go:126] duration metric: took 6.071837ms to wait for k8s-apps to be running ...
	I1128 00:48:50.865441   46126 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:48:50.865490   46126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:50.882729   46126 system_svc.go:56] duration metric: took 17.277766ms WaitForService to wait for kubelet.
	I1128 00:48:50.882767   46126 kubeadm.go:581] duration metric: took 4m22.572235871s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:48:50.882796   46126 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:48:50.886638   46126 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:48:50.886671   46126 node_conditions.go:123] node cpu capacity is 2
	I1128 00:48:50.886684   46126 node_conditions.go:105] duration metric: took 3.881703ms to run NodePressure ...
	I1128 00:48:50.886699   46126 start.go:228] waiting for startup goroutines ...
	I1128 00:48:50.886712   46126 start.go:233] waiting for cluster config update ...
	I1128 00:48:50.886725   46126 start.go:242] writing updated cluster config ...
	I1128 00:48:50.886995   46126 ssh_runner.go:195] Run: rm -f paused
	I1128 00:48:50.947562   46126 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 00:48:50.949119   46126 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-488423" cluster and "default" namespace by default
	I1128 00:48:47.419653   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:49.909410   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:51.909739   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:53.910387   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:56.408786   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:56.542000   45815 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002009 seconds
	I1128 00:48:56.567203   45815 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 00:48:56.583239   45815 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 00:48:57.114661   45815 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 00:48:57.114917   45815 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-473615 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 00:48:57.633030   45815 kubeadm.go:322] [bootstrap-token] Using token: vz7ey4.v2qfoncp2ok7nh54
	I1128 00:48:57.634835   45815 out.go:204]   - Configuring RBAC rules ...
	I1128 00:48:57.634961   45815 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 00:48:57.640535   45815 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 00:48:57.653911   45815 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 00:48:57.658740   45815 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 00:48:57.662927   45815 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 00:48:57.667238   45815 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 00:48:57.688281   45815 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 00:48:57.949630   45815 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 00:48:58.055744   45815 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 00:48:58.057024   45815 kubeadm.go:322] 
	I1128 00:48:58.057159   45815 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 00:48:58.057179   45815 kubeadm.go:322] 
	I1128 00:48:58.057290   45815 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 00:48:58.057310   45815 kubeadm.go:322] 
	I1128 00:48:58.057343   45815 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 00:48:58.057431   45815 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 00:48:58.057518   45815 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 00:48:58.057536   45815 kubeadm.go:322] 
	I1128 00:48:58.057601   45815 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 00:48:58.057609   45815 kubeadm.go:322] 
	I1128 00:48:58.057673   45815 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 00:48:58.057678   45815 kubeadm.go:322] 
	I1128 00:48:58.057719   45815 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 00:48:58.057787   45815 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 00:48:58.057841   45815 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 00:48:58.057844   45815 kubeadm.go:322] 
	I1128 00:48:58.057921   45815 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 00:48:58.057987   45815 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 00:48:58.057991   45815 kubeadm.go:322] 
	I1128 00:48:58.058062   45815 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vz7ey4.v2qfoncp2ok7nh54 \
	I1128 00:48:58.058148   45815 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1128 00:48:58.058183   45815 kubeadm.go:322] 	--control-plane 
	I1128 00:48:58.058198   45815 kubeadm.go:322] 
	I1128 00:48:58.058266   45815 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 00:48:58.058272   45815 kubeadm.go:322] 
	I1128 00:48:58.058347   45815 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vz7ey4.v2qfoncp2ok7nh54 \
	I1128 00:48:58.058449   45815 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1128 00:48:58.059375   45815 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 00:48:58.059404   45815 cni.go:84] Creating CNI manager for ""
	I1128 00:48:58.059415   45815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:48:58.061524   45815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:48:58.062981   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:48:58.121061   45815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:48:58.143978   45815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:48:58.144060   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:58.144068   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=no-preload-473615 minikube.k8s.io/updated_at=2023_11_28T00_48_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:58.495592   45815 ops.go:34] apiserver oom_adj: -16
	I1128 00:48:58.495756   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:58.590073   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:58.412254   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:00.912329   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:59.189174   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:59.688440   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:00.189285   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:00.688724   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:01.189197   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:01.688512   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:02.189219   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:02.689235   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:03.189405   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:03.689243   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:03.414190   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:05.909164   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:04.188645   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:04.688928   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:05.189330   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:05.689126   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:06.189257   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:06.688476   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:07.189386   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:07.689051   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:08.188961   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:08.689080   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:09.188591   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:09.688502   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:10.188492   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:10.303728   45815 kubeadm.go:1081] duration metric: took 12.159747313s to wait for elevateKubeSystemPrivileges.
	I1128 00:49:10.303773   45815 kubeadm.go:406] StartCluster complete in 5m13.413969558s
	I1128 00:49:10.303794   45815 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:49:10.303880   45815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:49:10.306274   45815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:49:10.306559   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:49:10.306678   45815 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:49:10.306764   45815 addons.go:69] Setting storage-provisioner=true in profile "no-preload-473615"
	I1128 00:49:10.306786   45815 addons.go:231] Setting addon storage-provisioner=true in "no-preload-473615"
	W1128 00:49:10.306799   45815 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:49:10.306822   45815 config.go:182] Loaded profile config "no-preload-473615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 00:49:10.306844   45815 host.go:66] Checking if "no-preload-473615" exists ...
	I1128 00:49:10.306903   45815 addons.go:69] Setting default-storageclass=true in profile "no-preload-473615"
	I1128 00:49:10.306924   45815 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-473615"
	I1128 00:49:10.307065   45815 addons.go:69] Setting metrics-server=true in profile "no-preload-473615"
	I1128 00:49:10.307089   45815 addons.go:231] Setting addon metrics-server=true in "no-preload-473615"
	W1128 00:49:10.307097   45815 addons.go:240] addon metrics-server should already be in state true
	I1128 00:49:10.307140   45815 host.go:66] Checking if "no-preload-473615" exists ...
	I1128 00:49:10.307283   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.307284   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.307366   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.307313   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.307600   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.307650   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.323788   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I1128 00:49:10.324333   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.324915   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.324940   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.325212   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42505
	I1128 00:49:10.325655   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.325825   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.326138   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.326156   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.326346   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.326375   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.326504   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.326968   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.326991   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.330263   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44581
	I1128 00:49:10.331124   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.331538   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.331559   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.331951   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.332131   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:49:10.335360   45815 addons.go:231] Setting addon default-storageclass=true in "no-preload-473615"
	W1128 00:49:10.335378   45815 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:49:10.335405   45815 host.go:66] Checking if "no-preload-473615" exists ...
	I1128 00:49:10.335685   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.335715   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.346750   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42245
	I1128 00:49:10.346822   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46137
	I1128 00:49:10.347279   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.347400   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.347703   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.347731   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.347906   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.347919   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.347983   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.348096   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:49:10.348232   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.348429   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:49:10.350025   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:49:10.352544   45815 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 00:49:10.350506   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:49:10.355541   45815 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:49:10.354491   45815 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 00:49:10.356963   45815 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:49:10.356980   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:49:10.356993   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:49:10.355570   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 00:49:10.357068   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:49:10.356139   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I1128 00:49:10.356295   45815 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-473615" context rescaled to 1 replicas
	I1128 00:49:10.357149   45815 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.195 Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:49:10.358543   45815 out.go:177] * Verifying Kubernetes components...
	I1128 00:49:10.359926   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:49:10.357719   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.360555   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.360575   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.361020   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.361318   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.361551   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:49:10.361574   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.361736   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:49:10.361938   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:49:10.362037   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.362129   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:49:10.362295   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.362317   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.362381   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:49:10.362676   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:49:10.362699   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.362961   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:49:10.363188   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:49:10.363360   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:49:10.363499   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:49:10.381194   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42707
	I1128 00:49:10.381543   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.382012   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.382032   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.382399   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.382584   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:49:10.384269   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:49:10.384500   45815 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:49:10.384513   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:49:10.384527   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:49:10.387448   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.388000   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:49:10.388027   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.388169   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:49:10.388335   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:49:10.388477   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:49:10.388578   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:49:10.513157   45815 node_ready.go:35] waiting up to 6m0s for node "no-preload-473615" to be "Ready" ...
	I1128 00:49:10.513251   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 00:49:10.546158   45815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:49:10.566225   45815 node_ready.go:49] node "no-preload-473615" has status "Ready":"True"
	I1128 00:49:10.566248   45815 node_ready.go:38] duration metric: took 53.063342ms waiting for node "no-preload-473615" to be "Ready" ...
	I1128 00:49:10.566259   45815 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:49:10.589374   45815 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 00:49:10.589400   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 00:49:10.608085   45815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:49:10.657717   45815 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 00:49:10.657746   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 00:49:10.693300   45815 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:10.745796   45815 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:49:10.745821   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 00:49:10.820139   45815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:49:10.848411   45815 pod_ready.go:92] pod "etcd-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:10.848444   45815 pod_ready.go:81] duration metric: took 155.116855ms waiting for pod "etcd-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:10.848459   45815 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:11.035904   45815 pod_ready.go:92] pod "kube-apiserver-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:11.035929   45815 pod_ready.go:81] duration metric: took 187.461745ms waiting for pod "kube-apiserver-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:11.035941   45815 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:11.269000   45815 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1128 00:49:11.634167   45815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.087967346s)
	I1128 00:49:11.634213   45815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.026096699s)
	I1128 00:49:11.634226   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.634239   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.634250   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.634272   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.634578   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.634621   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.634637   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.634639   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.634649   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.634650   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.634656   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.634660   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.634595   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:11.634942   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:11.634958   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:11.634986   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.635009   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.634989   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.635049   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.657473   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.657495   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.657814   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.657828   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.758491   45815 pod_ready.go:92] pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:11.758514   45815 pod_ready.go:81] duration metric: took 722.565796ms waiting for pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:11.758525   45815 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bv5lq" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:12.084449   45815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.264259029s)
	I1128 00:49:12.084510   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:12.084524   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:12.084846   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:12.084865   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:12.084875   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:12.084870   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:12.084885   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:12.085142   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:12.085152   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:12.085164   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:12.085174   45815 addons.go:467] Verifying addon metrics-server=true in "no-preload-473615"
	I1128 00:49:12.087081   45815 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1128 00:49:08.409321   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:10.909836   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:12.088572   45815 addons.go:502] enable addons completed in 1.781896775s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1128 00:49:13.830651   45815 pod_ready.go:102] pod "kube-proxy-bv5lq" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:14.830780   45815 pod_ready.go:92] pod "kube-proxy-bv5lq" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:14.830805   45815 pod_ready.go:81] duration metric: took 3.072274458s waiting for pod "kube-proxy-bv5lq" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:14.830815   45815 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:14.836248   45815 pod_ready.go:92] pod "kube-scheduler-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:14.836266   45815 pod_ready.go:81] duration metric: took 5.444378ms waiting for pod "kube-scheduler-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:14.836273   45815 pod_ready.go:38] duration metric: took 4.270002588s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:49:14.836288   45815 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:49:14.836329   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:49:14.860322   45815 api_server.go:72] duration metric: took 4.503144983s to wait for apiserver process to appear ...
	I1128 00:49:14.860354   45815 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:49:14.860375   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:49:14.866977   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 200:
	ok
	I1128 00:49:14.868294   45815 api_server.go:141] control plane version: v1.29.0-rc.0
	I1128 00:49:14.868318   45815 api_server.go:131] duration metric: took 7.955565ms to wait for apiserver health ...
	I1128 00:49:14.868328   45815 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:49:14.875943   45815 system_pods.go:59] 8 kube-system pods found
	I1128 00:49:14.875972   45815 system_pods.go:61] "coredns-76f75df574-kbrjg" [881031bb-af46-48a7-b609-7fb1c96b2056] Running
	I1128 00:49:14.875979   45815 system_pods.go:61] "etcd-no-preload-473615" [ae2b57ca-5a22-4f4b-b227-00edfbb3b520] Running
	I1128 00:49:14.875986   45815 system_pods.go:61] "kube-apiserver-no-preload-473615" [9e9104c8-ee9f-4370-b92e-d301ea9cd880] Running
	I1128 00:49:14.875993   45815 system_pods.go:61] "kube-controller-manager-no-preload-473615" [f52dccb6-3d88-44b2-b733-38dd240dffa5] Running
	I1128 00:49:14.875999   45815 system_pods.go:61] "kube-proxy-bv5lq" [fe88f49f-5fc1-4877-a982-38fee04c9e2d] Running
	I1128 00:49:14.876005   45815 system_pods.go:61] "kube-scheduler-no-preload-473615" [8d6a3177-757a-493e-ba5e-265f95d6f462] Running
	I1128 00:49:14.876019   45815 system_pods.go:61] "metrics-server-57f55c9bc5-mpqdq" [8cef6d4c-e932-4c97-8d87-3b4c3777c8b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:49:14.876031   45815 system_pods.go:61] "storage-provisioner" [b8fc9309-7354-44e3-aa10-f4fb3c185f62] Running
	I1128 00:49:14.876042   45815 system_pods.go:74] duration metric: took 7.70749ms to wait for pod list to return data ...
	I1128 00:49:14.876058   45815 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:49:14.918080   45815 default_sa.go:45] found service account: "default"
	I1128 00:49:14.918107   45815 default_sa.go:55] duration metric: took 42.036279ms for default service account to be created ...
	I1128 00:49:14.918119   45815 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:49:15.120338   45815 system_pods.go:86] 8 kube-system pods found
	I1128 00:49:15.120368   45815 system_pods.go:89] "coredns-76f75df574-kbrjg" [881031bb-af46-48a7-b609-7fb1c96b2056] Running
	I1128 00:49:15.120376   45815 system_pods.go:89] "etcd-no-preload-473615" [ae2b57ca-5a22-4f4b-b227-00edfbb3b520] Running
	I1128 00:49:15.120383   45815 system_pods.go:89] "kube-apiserver-no-preload-473615" [9e9104c8-ee9f-4370-b92e-d301ea9cd880] Running
	I1128 00:49:15.120390   45815 system_pods.go:89] "kube-controller-manager-no-preload-473615" [f52dccb6-3d88-44b2-b733-38dd240dffa5] Running
	I1128 00:49:15.120395   45815 system_pods.go:89] "kube-proxy-bv5lq" [fe88f49f-5fc1-4877-a982-38fee04c9e2d] Running
	I1128 00:49:15.120401   45815 system_pods.go:89] "kube-scheduler-no-preload-473615" [8d6a3177-757a-493e-ba5e-265f95d6f462] Running
	I1128 00:49:15.120413   45815 system_pods.go:89] "metrics-server-57f55c9bc5-mpqdq" [8cef6d4c-e932-4c97-8d87-3b4c3777c8b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:49:15.120420   45815 system_pods.go:89] "storage-provisioner" [b8fc9309-7354-44e3-aa10-f4fb3c185f62] Running
	I1128 00:49:15.120437   45815 system_pods.go:126] duration metric: took 202.310611ms to wait for k8s-apps to be running ...
	I1128 00:49:15.120452   45815 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:49:15.120501   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:49:15.134858   45815 system_svc.go:56] duration metric: took 14.396652ms WaitForService to wait for kubelet.
	I1128 00:49:15.134886   45815 kubeadm.go:581] duration metric: took 4.777716544s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:49:15.134902   45815 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:49:15.318344   45815 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:49:15.318370   45815 node_conditions.go:123] node cpu capacity is 2
	I1128 00:49:15.318380   45815 node_conditions.go:105] duration metric: took 183.473974ms to run NodePressure ...
	I1128 00:49:15.318390   45815 start.go:228] waiting for startup goroutines ...
	I1128 00:49:15.318396   45815 start.go:233] waiting for cluster config update ...
	I1128 00:49:15.318405   45815 start.go:242] writing updated cluster config ...
	I1128 00:49:15.318651   45815 ssh_runner.go:195] Run: rm -f paused
	I1128 00:49:15.368036   45815 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.0 (minor skew: 1)
	I1128 00:49:15.369853   45815 out.go:177] * Done! kubectl is now configured to use "no-preload-473615" cluster and "default" namespace by default
	I1128 00:49:12.909910   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:15.420062   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:17.421038   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:19.909444   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:21.910293   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:24.412962   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:26.908733   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:28.910353   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:31.104114   45269 pod_ready.go:81] duration metric: took 4m0.000750315s waiting for pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace to be "Ready" ...
	E1128 00:49:31.104164   45269 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:49:31.104219   45269 pod_ready.go:38] duration metric: took 4m1.201800344s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:49:31.104258   45269 kubeadm.go:640] restartCluster took 5m3.38216869s
	W1128 00:49:31.104338   45269 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 00:49:31.104371   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 00:49:35.883236   45269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.778829992s)
	I1128 00:49:35.883312   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:49:35.898846   45269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:49:35.910716   45269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:49:35.921838   45269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:49:35.921883   45269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1128 00:49:35.987683   45269 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1128 00:49:35.987889   45269 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 00:49:36.153771   45269 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 00:49:36.153926   45269 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 00:49:36.154056   45269 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 00:49:36.387112   45269 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 00:49:36.387236   45269 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 00:49:36.394929   45269 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1128 00:49:36.523951   45269 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 00:49:36.526180   45269 out.go:204]   - Generating certificates and keys ...
	I1128 00:49:36.526284   45269 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 00:49:36.526378   45269 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 00:49:36.526508   45269 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 00:49:36.526603   45269 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 00:49:36.526723   45269 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 00:49:36.526807   45269 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 00:49:36.526928   45269 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 00:49:36.527026   45269 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 00:49:36.527127   45269 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 00:49:36.527671   45269 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 00:49:36.527734   45269 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 00:49:36.527807   45269 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 00:49:36.966756   45269 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 00:49:37.138717   45269 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 00:49:37.307916   45269 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 00:49:37.374115   45269 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 00:49:37.375393   45269 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 00:49:37.377224   45269 out.go:204]   - Booting up control plane ...
	I1128 00:49:37.377338   45269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 00:49:37.381887   45269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 00:49:37.383114   45269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 00:49:37.384032   45269 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 00:49:37.387460   45269 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 00:49:47.893342   45269 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.504508 seconds
	I1128 00:49:47.893497   45269 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 00:49:47.911409   45269 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 00:49:48.437988   45269 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 00:49:48.438226   45269 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-732472 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1128 00:49:48.947631   45269 kubeadm.go:322] [bootstrap-token] Using token: g2kx2b.r3qu6fui94rrmu2m
	I1128 00:49:48.949581   45269 out.go:204]   - Configuring RBAC rules ...
	I1128 00:49:48.949746   45269 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 00:49:48.960004   45269 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 00:49:48.969068   45269 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 00:49:48.973998   45269 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 00:49:48.982331   45269 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 00:49:49.099721   45269 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 00:49:49.367382   45269 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 00:49:49.369069   45269 kubeadm.go:322] 
	I1128 00:49:49.369159   45269 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 00:49:49.369196   45269 kubeadm.go:322] 
	I1128 00:49:49.369325   45269 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 00:49:49.369339   45269 kubeadm.go:322] 
	I1128 00:49:49.369383   45269 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 00:49:49.369449   45269 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 00:49:49.369519   45269 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 00:49:49.369541   45269 kubeadm.go:322] 
	I1128 00:49:49.369619   45269 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 00:49:49.369725   45269 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 00:49:49.369822   45269 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 00:49:49.369839   45269 kubeadm.go:322] 
	I1128 00:49:49.369975   45269 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1128 00:49:49.370080   45269 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 00:49:49.370092   45269 kubeadm.go:322] 
	I1128 00:49:49.370202   45269 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token g2kx2b.r3qu6fui94rrmu2m \
	I1128 00:49:49.370371   45269 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1128 00:49:49.370419   45269 kubeadm.go:322]     --control-plane 	  
	I1128 00:49:49.370432   45269 kubeadm.go:322] 
	I1128 00:49:49.370515   45269 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 00:49:49.370527   45269 kubeadm.go:322] 
	I1128 00:49:49.370639   45269 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g2kx2b.r3qu6fui94rrmu2m \
	I1128 00:49:49.370783   45269 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1128 00:49:49.371106   45269 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 00:49:49.371134   45269 cni.go:84] Creating CNI manager for ""
	I1128 00:49:49.371148   45269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:49:49.373008   45269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:49:49.374371   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:49:49.384861   45269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
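The two commands above create /etc/cni/net.d and copy in a small bridge conflist. For orientation, a rough Go sketch that emits a conflist of that general shape follows; the concrete subnet, plugin options, and field values are assumptions for illustration and not the 457-byte file minikube actually writes.

package main

import (
	"encoding/json"
	"fmt"
)

// Emit a minimal bridge CNI conflist of the kind dropped into
// /etc/cni/net.d above. All concrete values here are assumed.
func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // assumed pod subnet
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}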
	I1128 00:49:49.402517   45269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:49:49.402582   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:49.402598   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=old-k8s-version-732472 minikube.k8s.io/updated_at=2023_11_28T00_49_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:49.441523   45269 ops.go:34] apiserver oom_adj: -16
	I1128 00:49:49.674343   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:49.796920   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:50.420537   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:50.920042   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:51.420533   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:51.920538   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:52.420730   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:52.920078   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:53.420670   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:53.920876   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:54.420798   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:54.920702   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:55.420180   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:55.920033   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:56.420702   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:56.920106   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:57.420244   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:57.920637   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:58.420226   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:58.920874   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:59.420228   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:59.920070   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:00.420845   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:00.920883   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:01.420977   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:01.920275   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:02.420097   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:02.920582   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:03.420001   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:03.919906   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:04.420071   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:04.580992   45269 kubeadm.go:1081] duration metric: took 15.178468662s to wait for elevateKubeSystemPrivileges.
	I1128 00:50:04.581023   45269 kubeadm.go:406] StartCluster complete in 5m36.912120738s
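The burst of `kubectl get sa default` calls above is minikube polling, roughly every 500ms, until the default service account exists after the cluster-admin binding has been created. A minimal Go sketch of that polling pattern follows; waitForDefaultSA is a hypothetical helper rather than minikube's actual code, with the binary and kubeconfig paths taken from the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until the default
// service account exists or the timeout expires, mirroring the retry
// loop visible in the log above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.16.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println(err)
}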
	I1128 00:50:04.581042   45269 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:50:04.581125   45269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:50:04.582704   45269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:50:04.582966   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:50:04.583000   45269 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:50:04.583077   45269 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-732472"
	I1128 00:50:04.583105   45269 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-732472"
	W1128 00:50:04.583116   45269 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:50:04.583192   45269 host.go:66] Checking if "old-k8s-version-732472" exists ...
	I1128 00:50:04.583206   45269 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-732472"
	I1128 00:50:04.583227   45269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-732472"
	I1128 00:50:04.583540   45269 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-732472"
	I1128 00:50:04.583565   45269 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-732472"
	W1128 00:50:04.583573   45269 addons.go:240] addon metrics-server should already be in state true
	I1128 00:50:04.583609   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.583635   45269 host.go:66] Checking if "old-k8s-version-732472" exists ...
	I1128 00:50:04.583640   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.583676   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.583643   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.583193   45269 config.go:182] Loaded profile config "old-k8s-version-732472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 00:50:04.584015   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.584069   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.602419   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36231
	I1128 00:50:04.602558   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35981
	I1128 00:50:04.602646   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36113
	I1128 00:50:04.603020   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.603118   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.603196   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.603571   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.603572   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.603597   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.603611   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.603729   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.603753   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.603939   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.603973   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.604086   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.604378   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:50:04.604489   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.604521   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.604617   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.604646   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.608900   45269 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-732472"
	W1128 00:50:04.608925   45269 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:50:04.608953   45269 host.go:66] Checking if "old-k8s-version-732472" exists ...
	I1128 00:50:04.611555   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.611628   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.622409   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33595
	I1128 00:50:04.622446   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45323
	I1128 00:50:04.622876   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.623000   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.623394   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.623424   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.623534   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.623567   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.623886   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.624365   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.624368   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:50:04.624556   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:50:04.626412   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:50:04.626443   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:50:04.629006   45269 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 00:50:04.630723   45269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:50:04.632378   45269 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:50:04.632395   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:50:04.632409   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:50:04.630641   45269 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 00:50:04.632467   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 00:50:04.632479   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:50:04.632126   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38563
	I1128 00:50:04.633062   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.633666   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.633692   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.634447   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.635020   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.635053   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.636332   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.636387   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.636733   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:50:04.636772   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.636795   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:50:04.636830   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.636952   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:50:04.637085   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:50:04.637132   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:50:04.637245   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:50:04.637296   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:50:04.637413   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:50:04.637448   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:50:04.637594   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:50:04.651941   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I1128 00:50:04.652604   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.653192   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.653222   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.653677   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.653838   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:50:04.655532   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:50:04.655848   45269 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:50:04.655868   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:50:04.655890   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:50:04.658852   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.659252   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:50:04.659280   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.659426   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:50:04.659602   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:50:04.659971   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:50:04.660096   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	W1128 00:50:04.792826   45269 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "old-k8s-version-732472" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1128 00:50:04.792863   45269 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1128 00:50:04.792890   45269 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:50:04.795799   45269 out.go:177] * Verifying Kubernetes components...
	I1128 00:50:04.797469   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:50:04.870889   45269 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-732472" to be "Ready" ...
	I1128 00:50:04.871024   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 00:50:04.888333   45269 node_ready.go:49] node "old-k8s-version-732472" has status "Ready":"True"
	I1128 00:50:04.888359   45269 node_ready.go:38] duration metric: took 17.44205ms waiting for node "old-k8s-version-732472" to be "Ready" ...
	I1128 00:50:04.888372   45269 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:50:04.899414   45269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:50:04.902681   45269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:04.904708   45269 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 00:50:04.904734   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 00:50:04.947930   45269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:50:04.977094   45269 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 00:50:04.977123   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 00:50:05.195712   45269 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:50:05.195795   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 00:50:05.292058   45269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:50:06.383144   45269 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.512083846s)
	I1128 00:50:06.383170   45269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.483727542s)
	I1128 00:50:06.383180   45269 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1128 00:50:06.383208   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.383221   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.383572   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.383599   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.383608   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.383606   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Closing plugin on server side
	I1128 00:50:06.383618   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.383835   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.383851   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.383870   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Closing plugin on server side
	I1128 00:50:06.423407   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.423447   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.423758   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.423783   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.423799   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Closing plugin on server side
	I1128 00:50:06.678261   45269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.73029562s)
	I1128 00:50:06.678312   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.678326   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.678640   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.678655   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.678663   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.678672   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.678927   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.678955   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.762082   45269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.46997729s)
	I1128 00:50:06.762140   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.762160   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.762538   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.762557   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.762569   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.762579   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.762599   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Closing plugin on server side
	I1128 00:50:06.762815   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.762830   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.762840   45269 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-732472"
	I1128 00:50:06.765825   45269 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 00:50:06.767637   45269 addons.go:502] enable addons completed in 2.184637132s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1128 00:50:06.959495   45269 pod_ready.go:102] pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace has status "Ready":"False"
	I1128 00:50:08.961160   45269 pod_ready.go:102] pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace has status "Ready":"False"
	I1128 00:50:11.459984   45269 pod_ready.go:102] pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace has status "Ready":"False"
	I1128 00:50:12.959294   45269 pod_ready.go:92] pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace has status "Ready":"True"
	I1128 00:50:12.959317   45269 pod_ready.go:81] duration metric: took 8.056612005s waiting for pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.959326   45269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-fsfpw" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.973244   45269 pod_ready.go:92] pod "coredns-5644d7b6d9-fsfpw" in "kube-system" namespace has status "Ready":"True"
	I1128 00:50:12.973268   45269 pod_ready.go:81] duration metric: took 13.936307ms waiting for pod "coredns-5644d7b6d9-fsfpw" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.973278   45269 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-88chq" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.980471   45269 pod_ready.go:92] pod "kube-proxy-88chq" in "kube-system" namespace has status "Ready":"True"
	I1128 00:50:12.980489   45269 pod_ready.go:81] duration metric: took 7.20414ms waiting for pod "kube-proxy-88chq" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.980496   45269 pod_ready.go:38] duration metric: took 8.092113593s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:50:12.980511   45269 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:50:12.980554   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:50:12.996604   45269 api_server.go:72] duration metric: took 8.203675443s to wait for apiserver process to appear ...
	I1128 00:50:12.996645   45269 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:50:12.996670   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:50:13.006987   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I1128 00:50:13.007986   45269 api_server.go:141] control plane version: v1.16.0
	I1128 00:50:13.008003   45269 api_server.go:131] duration metric: took 11.352257ms to wait for apiserver health ...
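The healthz wait above boils down to an HTTPS GET against https://192.168.39.172:8443/healthz that is treated as healthy once it returns 200 with body "ok". A small Go sketch of such a probe follows; it skips TLS verification only because no CA bundle is wired in, which is an assumption of the sketch rather than what minikube does.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz issues a GET against the apiserver /healthz endpoint and
// reports whether it answered 200 "ok", as in the log lines above.
func probeHealthz(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: real code would load the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := probeHealthz("https://192.168.39.172:8443/healthz")
	fmt.Println(ok, err)
}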
	I1128 00:50:13.008010   45269 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:50:13.013658   45269 system_pods.go:59] 5 kube-system pods found
	I1128 00:50:13.013677   45269 system_pods.go:61] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.013682   45269 system_pods.go:61] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.013686   45269 system_pods.go:61] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.013693   45269 system_pods.go:61] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.013697   45269 system_pods.go:61] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.013703   45269 system_pods.go:74] duration metric: took 5.688575ms to wait for pod list to return data ...
	I1128 00:50:13.013710   45269 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:50:13.016210   45269 default_sa.go:45] found service account: "default"
	I1128 00:50:13.016228   45269 default_sa.go:55] duration metric: took 2.513069ms for default service account to be created ...
	I1128 00:50:13.016234   45269 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:50:13.020464   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:13.020488   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.020496   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.020502   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.020513   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.020522   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.020544   45269 retry.go:31] will retry after 244.092512ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:13.270858   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:13.270893   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.270901   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.270907   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.270918   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.270926   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.270946   45269 retry.go:31] will retry after 311.602199ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:13.588013   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:13.588041   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.588047   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.588051   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.588057   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.588062   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.588076   45269 retry.go:31] will retry after 298.08088ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:13.891272   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:13.891302   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.891307   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.891311   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.891318   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.891323   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.891339   45269 retry.go:31] will retry after 474.390305ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:14.371201   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:14.371230   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:14.371236   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:14.371241   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:14.371248   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:14.371253   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:14.371269   45269 retry.go:31] will retry after 719.510586ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:15.096817   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:15.096846   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:15.096851   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:15.096855   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:15.096862   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:15.096866   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:15.096881   45269 retry.go:31] will retry after 684.457384ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:15.786918   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:15.786947   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:15.786952   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:15.786956   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:15.786962   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:15.786967   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:15.786982   45269 retry.go:31] will retry after 721.543291ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:16.513230   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:16.513258   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:16.513263   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:16.513268   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:16.513275   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:16.513280   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:16.513296   45269 retry.go:31] will retry after 1.405502561s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:17.926572   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:17.926610   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:17.926619   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:17.926626   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:17.926636   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:17.926642   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:17.926662   45269 retry.go:31] will retry after 1.65088536s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:19.584099   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:19.584130   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:19.584136   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:19.584140   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:19.584147   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:19.584152   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:19.584168   45269 retry.go:31] will retry after 1.660488369s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:21.250659   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:21.250706   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:21.250714   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:21.250719   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:21.250729   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:21.250736   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:21.250757   45269 retry.go:31] will retry after 1.762203818s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:23.018771   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:23.018798   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:23.018804   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:23.018808   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:23.018815   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:23.018819   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:23.018837   45269 retry.go:31] will retry after 2.558255345s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:25.584363   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:25.584394   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:25.584402   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:25.584409   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:25.584417   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:25.584422   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:25.584446   45269 retry.go:31] will retry after 4.457632402s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:30.049343   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:30.049374   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:30.049381   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:30.049388   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:30.049398   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:30.049406   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:30.049426   45269 retry.go:31] will retry after 5.077489821s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:35.133974   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:35.134001   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:35.134006   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:35.134010   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:35.134022   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:35.134029   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:35.134048   45269 retry.go:31] will retry after 5.675627515s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:40.814779   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:40.814808   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:40.814814   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:40.814818   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:40.814825   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:40.814829   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:40.814846   45269 retry.go:31] will retry after 5.701774609s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:46.524426   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:46.524467   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:46.524475   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:46.524482   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:46.524492   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:46.524499   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:46.524521   45269 retry.go:31] will retry after 7.322045517s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:53.852348   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:53.852378   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:53.852387   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:53.852394   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:53.852406   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:53.852413   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:53.852442   45269 retry.go:31] will retry after 12.532542473s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:51:06.392828   45269 system_pods.go:86] 9 kube-system pods found
	I1128 00:51:06.392858   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:51:06.392863   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:51:06.392872   45269 system_pods.go:89] "etcd-old-k8s-version-732472" [b839e564-30b4-4ddf-a7af-15a11ae6caaf] Pending
	I1128 00:51:06.392876   45269 system_pods.go:89] "kube-apiserver-old-k8s-version-732472" [7f8f59a8-21fb-4161-ba13-c123b21f74cb] Pending
	I1128 00:51:06.392882   45269 system_pods.go:89] "kube-controller-manager-old-k8s-version-732472" [0271d0e4-295a-47fc-a42f-77a8f9d71930] Pending
	I1128 00:51:06.392886   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:51:06.392889   45269 system_pods.go:89] "kube-scheduler-old-k8s-version-732472" [a22ecb05-e88d-4fc4-8e16-df419a9564e3] Pending
	I1128 00:51:06.392897   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:51:06.392901   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:51:06.392915   45269 retry.go:31] will retry after 10.519018157s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:51:16.918264   45269 system_pods.go:86] 9 kube-system pods found
	I1128 00:51:16.918303   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:51:16.918311   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:51:16.918319   45269 system_pods.go:89] "etcd-old-k8s-version-732472" [b839e564-30b4-4ddf-a7af-15a11ae6caaf] Running
	I1128 00:51:16.918326   45269 system_pods.go:89] "kube-apiserver-old-k8s-version-732472" [7f8f59a8-21fb-4161-ba13-c123b21f74cb] Running
	I1128 00:51:16.918333   45269 system_pods.go:89] "kube-controller-manager-old-k8s-version-732472" [0271d0e4-295a-47fc-a42f-77a8f9d71930] Running
	I1128 00:51:16.918340   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:51:16.918346   45269 system_pods.go:89] "kube-scheduler-old-k8s-version-732472" [a22ecb05-e88d-4fc4-8e16-df419a9564e3] Running
	I1128 00:51:16.918360   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:51:16.918375   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:51:16.918386   45269 system_pods.go:126] duration metric: took 1m3.902146285s to wait for k8s-apps to be running ...
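The kube-system wait above keeps re-listing pods and retrying with a roughly doubling interval (244ms, 311ms, ... up to ~12s) until the etcd, kube-apiserver, kube-controller-manager and kube-scheduler static pods appear. A generic Go sketch of that retry-with-backoff shape follows; the interval policy and helper names are assumptions, not minikube's retry package.

package main

import (
	"fmt"
	"time"
)

// retryWithBackoff calls check until it reports no missing components or
// attempts run out, roughly doubling the wait between tries as the log
// above does.
func retryWithBackoff(check func() []string, attempts int, initial time.Duration) error {
	wait := initial
	for i := 0; i < attempts; i++ {
		missing := check()
		if len(missing) == 0 {
			return nil
		}
		fmt.Printf("will retry after %s: missing components: %v\n", wait, missing)
		time.Sleep(wait)
		wait *= 2
	}
	return fmt.Errorf("components still missing after %d attempts", attempts)
}

func main() {
	calls := 0
	err := retryWithBackoff(func() []string {
		calls++
		if calls < 4 {
			return []string{"etcd", "kube-apiserver"}
		}
		return nil
	}, 10, 250*time.Millisecond)
	fmt.Println(err)
}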
	I1128 00:51:16.918398   45269 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:51:16.918445   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:51:16.937522   45269 system_svc.go:56] duration metric: took 19.116204ms WaitForService to wait for kubelet.
	I1128 00:51:16.937556   45269 kubeadm.go:581] duration metric: took 1m12.144633009s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:51:16.937577   45269 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:51:16.941812   45269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:51:16.941838   45269 node_conditions.go:123] node cpu capacity is 2
	I1128 00:51:16.941849   45269 node_conditions.go:105] duration metric: took 4.264769ms to run NodePressure ...
	I1128 00:51:16.941859   45269 start.go:228] waiting for startup goroutines ...
	I1128 00:51:16.941865   45269 start.go:233] waiting for cluster config update ...
	I1128 00:51:16.941874   45269 start.go:242] writing updated cluster config ...
	I1128 00:51:16.942150   45269 ssh_runner.go:195] Run: rm -f paused
	I1128 00:51:16.992567   45269 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1128 00:51:16.994677   45269 out.go:177] 
	W1128 00:51:16.996083   45269 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1128 00:51:16.997442   45269 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1128 00:51:16.998644   45269 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-732472" cluster and "default" namespace by default
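	
	The version-skew warning above comes from minikube comparing the host kubectl (1.28.4) against the cluster's Kubernetes 1.16.0. A minimal sketch of the workaround the log itself suggests, using minikube's bundled, version-matched kubectl; the profile name is taken from this run, and the exact download behavior is assumed from minikube's documented defaults:
	
	    # use the bundled kubectl (fetches a v1.16.0 kubectl matching the cluster on first use)
	    minikube -p old-k8s-version-732472 kubectl -- get pods -A
	    # shorter form when only the default profile exists
	    minikube kubectl -- get pods -A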
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 00:44:09 UTC, ends at Tue 2023-11-28 01:00:18 UTC. --
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.772287103Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=121b4887-35cb-4074-a9de-b1b6b138c4c8 name=/runtime.v1.RuntimeService/Version
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.774240868Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0bbe6832-acf9-4d50-83ba-2f3a12cb218d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.774848337Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133218774823587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=0bbe6832-acf9-4d50-83ba-2f3a12cb218d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.775631067Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ab424aec-bd6a-4900-850f-74929ed8a76e name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.775743561Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ab424aec-bd6a-4900-850f-74929ed8a76e name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.776071533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3d9279a66ef5ba4ae3596d4bf3fb92de987a6e5d2eb6c74aa82ca7cd363f329,PodSandboxId:c499bf98989ccbb095beca531911e4a93230b5416b2b9877974699d543cc99d9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132607578901981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e,},Annotations:map[string]string{io.kubernetes.container.hash: 489ab746,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b1825f1d0c823d6402042029f120e8bf1bc20e1e5c148ea46b80e076d2ce506,PodSandboxId:c72b8f101411b171ec883d309a14daa4fb6afe576630c197058041ed5e01cbc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701132606650688747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88chq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 273e27bd-a4a8-4fa9-913a-a67ee5a80990,},Annotations:map[string]string{io.kubernetes.container.hash: 211ab8e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2177833c17712485813aed4f24f679d723a5475f4c1e4c3ee7a31460d51f7e2,PodSandboxId:ef320b5118d97e38314b7e1ac09ff023b1e7920f3a9131622891cc71b43bef32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701132606085911598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-fsfpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a466ce19-debe-424d-9eec-00513557472b,},Annotations:map[string]string{io.kubernetes.container.hash: a9b094fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60c716bf8e7dd58475aa5c4adaa00e92d364219f163e09118ce73e14c7c817e,PodSandboxId:ce0022ee5b25eb4f4c61fc2700352eec46f2699cf1b9055299e80f3ab938dc5d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701132605955777949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5s84s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4388650c-3956-44bf-86ea-6b64743166ca,},Annotat
ions:map[string]string{io.kubernetes.container.hash: a9b094fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e1c4fc0eff65b28c87e522b0d6bc9d46c17aa1d9752b63a9b673ac567a03cd,PodSandboxId:c586e2dd67a2c1fa5b172b80c95f0effa1b094c48cdc9dfc3053561b7b0518a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701132580435673670,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e287dac636cd18fa651d2219ad4ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 71b20b40,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1823a59d40c071836f4fd7cfa2e29752ad7e0c7464b7c9cda8a536c91acacd0d,PodSandboxId:a0326b82d62aaa1b9f93226cad826eeae2902d5435d34f6d1addd8e93b1f91ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701132579102946292,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ef985c9c8afad330ec7fb85589854860148ded23563c2ed76681bf82c48022,PodSandboxId:b81167430dd7c78b616968919912aa2b125b8c8dc621183c4014c5f349a6faeb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701132579004863537,Labels:map[string]string{io.kubernetes.container.
name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bcb5d5d5a7f5d15294b93974a89442f39c6d5b72f90c1bd8455ef232d30201,PodSandboxId:dd2bae0b694c320c2f59a1392dae169ea617fee2aa56f64e654f280bb65bce16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701132578706297173,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4b9e9d536b8786f0dbde3fec6faabba,},Annotations:map[string]string{io.kubernetes.container.hash: 3a6565be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ab424aec-bd6a-4900-850f-74929ed8a76e name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.819284517Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4cd0bed4-1c60-4f70-9d75-40ae6d7dc63d name=/runtime.v1.RuntimeService/Version
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.819480550Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4cd0bed4-1c60-4f70-9d75-40ae6d7dc63d name=/runtime.v1.RuntimeService/Version
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.820723333Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6cc7dddc-b588-4cb3-95b8-0525664f0f57 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.821187835Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133218821173639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=6cc7dddc-b588-4cb3-95b8-0525664f0f57 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.821738963Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=57d12e57-30f0-4c8d-aab4-d815fc45561c name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.821819213Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=57d12e57-30f0-4c8d-aab4-d815fc45561c name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.822001095Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3d9279a66ef5ba4ae3596d4bf3fb92de987a6e5d2eb6c74aa82ca7cd363f329,PodSandboxId:c499bf98989ccbb095beca531911e4a93230b5416b2b9877974699d543cc99d9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132607578901981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e,},Annotations:map[string]string{io.kubernetes.container.hash: 489ab746,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b1825f1d0c823d6402042029f120e8bf1bc20e1e5c148ea46b80e076d2ce506,PodSandboxId:c72b8f101411b171ec883d309a14daa4fb6afe576630c197058041ed5e01cbc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701132606650688747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88chq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 273e27bd-a4a8-4fa9-913a-a67ee5a80990,},Annotations:map[string]string{io.kubernetes.container.hash: 211ab8e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2177833c17712485813aed4f24f679d723a5475f4c1e4c3ee7a31460d51f7e2,PodSandboxId:ef320b5118d97e38314b7e1ac09ff023b1e7920f3a9131622891cc71b43bef32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701132606085911598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-fsfpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a466ce19-debe-424d-9eec-00513557472b,},Annotations:map[string]string{io.kubernetes.container.hash: a9b094fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60c716bf8e7dd58475aa5c4adaa00e92d364219f163e09118ce73e14c7c817e,PodSandboxId:ce0022ee5b25eb4f4c61fc2700352eec46f2699cf1b9055299e80f3ab938dc5d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701132605955777949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5s84s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4388650c-3956-44bf-86ea-6b64743166ca,},Annotat
ions:map[string]string{io.kubernetes.container.hash: a9b094fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e1c4fc0eff65b28c87e522b0d6bc9d46c17aa1d9752b63a9b673ac567a03cd,PodSandboxId:c586e2dd67a2c1fa5b172b80c95f0effa1b094c48cdc9dfc3053561b7b0518a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701132580435673670,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e287dac636cd18fa651d2219ad4ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 71b20b40,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1823a59d40c071836f4fd7cfa2e29752ad7e0c7464b7c9cda8a536c91acacd0d,PodSandboxId:a0326b82d62aaa1b9f93226cad826eeae2902d5435d34f6d1addd8e93b1f91ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701132579102946292,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ef985c9c8afad330ec7fb85589854860148ded23563c2ed76681bf82c48022,PodSandboxId:b81167430dd7c78b616968919912aa2b125b8c8dc621183c4014c5f349a6faeb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701132579004863537,Labels:map[string]string{io.kubernetes.container.
name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bcb5d5d5a7f5d15294b93974a89442f39c6d5b72f90c1bd8455ef232d30201,PodSandboxId:dd2bae0b694c320c2f59a1392dae169ea617fee2aa56f64e654f280bb65bce16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701132578706297173,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4b9e9d536b8786f0dbde3fec6faabba,},Annotations:map[string]string{io.kubernetes.container.hash: 3a6565be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=57d12e57-30f0-4c8d-aab4-d815fc45561c name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.849596773Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=a5952b5a-e169-4648-944e-703d955dece0 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.849868644Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ef148471e3870002824b796b37f4f030af3565f02a7076cd6ab0f0a5e1fb03e7,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-nd9qp,Uid:de534eb9-4a5c-400d-ba7c-da4bc1bef670,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132607917896538,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-nd9qp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de534eb9-4a5c-400d-ba7c-da4bc1bef670,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T00:50:07.566313946Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c499bf98989ccbb095beca531911e4a93230b5416b2b9877974699d543cc99d9,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9f880d43-3a6e-4eed-8f26-1a1ca9bdc6
0e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132607033839400,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\
"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-11-28T00:50:06.684722076Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ef320b5118d97e38314b7e1ac09ff023b1e7920f3a9131622891cc71b43bef32,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-fsfpw,Uid:a466ce19-debe-424d-9eec-00513557472b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132605104635324,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-fsfpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a466ce19-debe-424d-9eec-00513557472b,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T00:50:04.739557485Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce0022ee5b25eb4f4c61fc2700352eec46f2699cf1b9055299e80f3ab938dc5d,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-5s84s,Uid:4388650c-3956-
44bf-86ea-6b64743166ca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132604996764648,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-5s84s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4388650c-3956-44bf-86ea-6b64743166ca,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T00:50:04.635768318Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c72b8f101411b171ec883d309a14daa4fb6afe576630c197058041ed5e01cbc9,Metadata:&PodSandboxMetadata{Name:kube-proxy-88chq,Uid:273e27bd-a4a8-4fa9-913a-a67ee5a80990,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132604452608809,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-88chq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 273e27bd-a4a8-4fa9-913a-a67ee5a80990,k8s-app: kube-proxy,pod-
template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T00:50:04.105481854Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a0326b82d62aaa1b9f93226cad826eeae2902d5435d34f6d1addd8e93b1f91ad,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-732472,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132578316458283,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2023-11-28T00:49:37.793690103Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b81167430dd7c78b616968919912aa2b125b8c8dc621183c4014c5f349a6faeb,Metadata:&
PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-732472,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132578304542594,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2023-11-28T00:49:37.793691239Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c586e2dd67a2c1fa5b172b80c95f0effa1b094c48cdc9dfc3053561b7b0518a0,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-732472,Uid:f3e287dac636cd18fa651d2219ad4ea9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132578254103069,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8
s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e287dac636cd18fa651d2219ad4ea9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f3e287dac636cd18fa651d2219ad4ea9,kubernetes.io/config.seen: 2023-11-28T00:49:37.793681603Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dd2bae0b694c320c2f59a1392dae169ea617fee2aa56f64e654f280bb65bce16,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-732472,Uid:a4b9e9d536b8786f0dbde3fec6faabba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132578249064316,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4b9e9d536b8786f0dbde3fec6faabba,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a4b9e9d536b8786f0dbde3fec6faabba,kubernetes.io/config.seen: 2023-11-28T00:49:37.793688315Z
,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=a5952b5a-e169-4648-944e-703d955dece0 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.850951521Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=51e02fd1-a178-417a-9fe2-50a778c9208c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.851037401Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=51e02fd1-a178-417a-9fe2-50a778c9208c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.851208558Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3d9279a66ef5ba4ae3596d4bf3fb92de987a6e5d2eb6c74aa82ca7cd363f329,PodSandboxId:c499bf98989ccbb095beca531911e4a93230b5416b2b9877974699d543cc99d9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132607578901981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e,},Annotations:map[string]string{io.kubernetes.container.hash: 489ab746,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b1825f1d0c823d6402042029f120e8bf1bc20e1e5c148ea46b80e076d2ce506,PodSandboxId:c72b8f101411b171ec883d309a14daa4fb6afe576630c197058041ed5e01cbc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701132606650688747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88chq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 273e27bd-a4a8-4fa9-913a-a67ee5a80990,},Annotations:map[string]string{io.kubernetes.container.hash: 211ab8e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2177833c17712485813aed4f24f679d723a5475f4c1e4c3ee7a31460d51f7e2,PodSandboxId:ef320b5118d97e38314b7e1ac09ff023b1e7920f3a9131622891cc71b43bef32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701132606085911598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-fsfpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a466ce19-debe-424d-9eec-00513557472b,},Annotations:map[string]string{io.kubernetes.container.hash: a9b094fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60c716bf8e7dd58475aa5c4adaa00e92d364219f163e09118ce73e14c7c817e,PodSandboxId:ce0022ee5b25eb4f4c61fc2700352eec46f2699cf1b9055299e80f3ab938dc5d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701132605955777949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5s84s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4388650c-3956-44bf-86ea-6b64743166ca,},Annotat
ions:map[string]string{io.kubernetes.container.hash: a9b094fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e1c4fc0eff65b28c87e522b0d6bc9d46c17aa1d9752b63a9b673ac567a03cd,PodSandboxId:c586e2dd67a2c1fa5b172b80c95f0effa1b094c48cdc9dfc3053561b7b0518a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701132580435673670,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e287dac636cd18fa651d2219ad4ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 71b20b40,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1823a59d40c071836f4fd7cfa2e29752ad7e0c7464b7c9cda8a536c91acacd0d,PodSandboxId:a0326b82d62aaa1b9f93226cad826eeae2902d5435d34f6d1addd8e93b1f91ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701132579102946292,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ef985c9c8afad330ec7fb85589854860148ded23563c2ed76681bf82c48022,PodSandboxId:b81167430dd7c78b616968919912aa2b125b8c8dc621183c4014c5f349a6faeb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701132579004863537,Labels:map[string]string{io.kubernetes.container.
name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bcb5d5d5a7f5d15294b93974a89442f39c6d5b72f90c1bd8455ef232d30201,PodSandboxId:dd2bae0b694c320c2f59a1392dae169ea617fee2aa56f64e654f280bb65bce16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701132578706297173,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4b9e9d536b8786f0dbde3fec6faabba,},Annotations:map[string]string{io.kubernetes.container.hash: 3a6565be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=51e02fd1-a178-417a-9fe2-50a778c9208c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.862141375Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2e588375-1b8e-4b02-bf53-769241a216c5 name=/runtime.v1.RuntimeService/Version
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.862240192Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2e588375-1b8e-4b02-bf53-769241a216c5 name=/runtime.v1.RuntimeService/Version
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.863803500Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c2e497b4-d3da-4de5-8d04-8de776431183 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.864266694Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133218864241687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=c2e497b4-d3da-4de5-8d04-8de776431183 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.864806315Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b5ed35b3-e73d-40c6-9a23-97107b9be99f name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.864884026Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b5ed35b3-e73d-40c6-9a23-97107b9be99f name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:00:18 old-k8s-version-732472 crio[710]: time="2023-11-28 01:00:18.865055045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3d9279a66ef5ba4ae3596d4bf3fb92de987a6e5d2eb6c74aa82ca7cd363f329,PodSandboxId:c499bf98989ccbb095beca531911e4a93230b5416b2b9877974699d543cc99d9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132607578901981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e,},Annotations:map[string]string{io.kubernetes.container.hash: 489ab746,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b1825f1d0c823d6402042029f120e8bf1bc20e1e5c148ea46b80e076d2ce506,PodSandboxId:c72b8f101411b171ec883d309a14daa4fb6afe576630c197058041ed5e01cbc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701132606650688747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88chq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 273e27bd-a4a8-4fa9-913a-a67ee5a80990,},Annotations:map[string]string{io.kubernetes.container.hash: 211ab8e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2177833c17712485813aed4f24f679d723a5475f4c1e4c3ee7a31460d51f7e2,PodSandboxId:ef320b5118d97e38314b7e1ac09ff023b1e7920f3a9131622891cc71b43bef32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701132606085911598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-fsfpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a466ce19-debe-424d-9eec-00513557472b,},Annotations:map[string]string{io.kubernetes.container.hash: a9b094fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60c716bf8e7dd58475aa5c4adaa00e92d364219f163e09118ce73e14c7c817e,PodSandboxId:ce0022ee5b25eb4f4c61fc2700352eec46f2699cf1b9055299e80f3ab938dc5d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701132605955777949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5s84s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4388650c-3956-44bf-86ea-6b64743166ca,},Annotat
ions:map[string]string{io.kubernetes.container.hash: a9b094fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e1c4fc0eff65b28c87e522b0d6bc9d46c17aa1d9752b63a9b673ac567a03cd,PodSandboxId:c586e2dd67a2c1fa5b172b80c95f0effa1b094c48cdc9dfc3053561b7b0518a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701132580435673670,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e287dac636cd18fa651d2219ad4ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 71b20b40,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1823a59d40c071836f4fd7cfa2e29752ad7e0c7464b7c9cda8a536c91acacd0d,PodSandboxId:a0326b82d62aaa1b9f93226cad826eeae2902d5435d34f6d1addd8e93b1f91ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701132579102946292,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ef985c9c8afad330ec7fb85589854860148ded23563c2ed76681bf82c48022,PodSandboxId:b81167430dd7c78b616968919912aa2b125b8c8dc621183c4014c5f349a6faeb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701132579004863537,Labels:map[string]string{io.kubernetes.container.
name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bcb5d5d5a7f5d15294b93974a89442f39c6d5b72f90c1bd8455ef232d30201,PodSandboxId:dd2bae0b694c320c2f59a1392dae169ea617fee2aa56f64e654f280bb65bce16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701132578706297173,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4b9e9d536b8786f0dbde3fec6faabba,},Annotations:map[string]string{io.kubernetes.container.hash: 3a6565be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b5ed35b3-e73d-40c6-9a23-97107b9be99f name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d3d9279a66ef5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   c499bf98989cc       storage-provisioner
	9b1825f1d0c82       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   c72b8f101411b       kube-proxy-88chq
	a2177833c1771       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   ef320b5118d97       coredns-5644d7b6d9-fsfpw
	b60c716bf8e7d       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   ce0022ee5b25e       coredns-5644d7b6d9-5s84s
	b9e1c4fc0eff6       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   c586e2dd67a2c       etcd-old-k8s-version-732472
	1823a59d40c07       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   a0326b82d62aa       kube-controller-manager-old-k8s-version-732472
	a8ef985c9c8af       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   b81167430dd7c       kube-scheduler-old-k8s-version-732472
	f0bcb5d5d5a7f       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            0                   dd2bae0b694c3       kube-apiserver-old-k8s-version-732472
	
	* 
	* ==> coredns [a2177833c17712485813aed4f24f679d723a5475f4c1e4c3ee7a31460d51f7e2] <==
	* .:53
	2023-11-28T00:50:06.431Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-11-28T00:50:06.431Z [INFO] CoreDNS-1.6.2
	2023-11-28T00:50:06.432Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-11-28T00:50:38.360Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	[INFO] Reloading complete
	
	* 
	* ==> coredns [b60c716bf8e7dd58475aa5c4adaa00e92d364219f163e09118ce73e14c7c817e] <==
	* .:53
	2023-11-28T00:50:06.394Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-11-28T00:50:06.394Z [INFO] CoreDNS-1.6.2
	2023-11-28T00:50:06.394Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-11-28T00:50:36.221Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-732472
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-732472
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=old-k8s-version-732472
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T00_49_49_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 00:49:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 00:59:45 +0000   Tue, 28 Nov 2023 00:49:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 00:59:45 +0000   Tue, 28 Nov 2023 00:49:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 00:59:45 +0000   Tue, 28 Nov 2023 00:49:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 00:59:45 +0000   Tue, 28 Nov 2023 00:49:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    old-k8s-version-732472
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 1581785acd1f4bd3a339cf98671c531d
	 System UUID:                1581785a-cd1f-4bd3-a339-cf98671c531d
	 Boot ID:                    4b090cb9-312f-4acd-958f-f6e962927841
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-5s84s                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                coredns-5644d7b6d9-fsfpw                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-732472                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                kube-apiserver-old-k8s-version-732472             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                kube-controller-manager-old-k8s-version-732472    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                kube-proxy-88chq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-732472             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                metrics-server-74d5856cc6-nd9qp                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             340Mi (16%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-732472     Node old-k8s-version-732472 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet, old-k8s-version-732472     Node old-k8s-version-732472 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet, old-k8s-version-732472     Node old-k8s-version-732472 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-732472  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Nov28 00:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.087466] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.583383] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.472544] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147326] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.574552] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.935361] systemd-fstab-generator[634]: Ignoring "noauto" for root device
	[  +0.165813] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.172772] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.132870] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.234892] systemd-fstab-generator[693]: Ignoring "noauto" for root device
	[ +19.783735] systemd-fstab-generator[1023]: Ignoring "noauto" for root device
	[  +0.438464] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +15.991683] kauditd_printk_skb: 3 callbacks suppressed
	[Nov28 00:45] kauditd_printk_skb: 2 callbacks suppressed
	[Nov28 00:49] systemd-fstab-generator[3136]: Ignoring "noauto" for root device
	[  +1.432963] kauditd_printk_skb: 6 callbacks suppressed
	[Nov28 00:50] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [b9e1c4fc0eff65b28c87e522b0d6bc9d46c17aa1d9752b63a9b673ac567a03cd] <==
	* 2023-11-28 00:49:40.584582 W | auth: simple token is not cryptographically signed
	2023-11-28 00:49:40.589597 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-11-28 00:49:40.590813 I | etcdserver: bbf1bb039b0d3451 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-11-28 00:49:40.591130 I | etcdserver/membership: added member bbf1bb039b0d3451 [https://192.168.39.172:2380] to cluster a5f5c7bb54d744d4
	2023-11-28 00:49:40.592893 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-28 00:49:40.593298 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-28 00:49:40.593451 I | embed: listening for metrics on http://192.168.39.172:2381
	2023-11-28 00:49:41.376444 I | raft: bbf1bb039b0d3451 is starting a new election at term 1
	2023-11-28 00:49:41.376573 I | raft: bbf1bb039b0d3451 became candidate at term 2
	2023-11-28 00:49:41.376587 I | raft: bbf1bb039b0d3451 received MsgVoteResp from bbf1bb039b0d3451 at term 2
	2023-11-28 00:49:41.376597 I | raft: bbf1bb039b0d3451 became leader at term 2
	2023-11-28 00:49:41.376602 I | raft: raft.node: bbf1bb039b0d3451 elected leader bbf1bb039b0d3451 at term 2
	2023-11-28 00:49:41.377075 I | etcdserver: published {Name:old-k8s-version-732472 ClientURLs:[https://192.168.39.172:2379]} to cluster a5f5c7bb54d744d4
	2023-11-28 00:49:41.377136 I | embed: ready to serve client requests
	2023-11-28 00:49:41.378042 I | etcdserver: setting up the initial cluster version to 3.3
	2023-11-28 00:49:41.378756 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-28 00:49:41.378837 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-11-28 00:49:41.379301 I | etcdserver/api: enabled capabilities for version 3.3
	2023-11-28 00:49:41.389781 I | embed: ready to serve client requests
	2023-11-28 00:49:41.394100 I | embed: serving client requests on 192.168.39.172:2379
	2023-11-28 00:50:06.205833 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-732472\" " with result "range_response_count:1 size:4370" took too long (204.132171ms) to execute
	2023-11-28 00:50:06.206240 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" " with result "range_response_count:0 size:5" took too long (141.002534ms) to execute
	2023-11-28 00:50:06.222049 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/coredns\" " with result "range_response_count:1 size:466" took too long (162.26639ms) to execute
	2023-11-28 00:59:41.412308 I | mvcc: store.index: compact 661
	2023-11-28 00:59:41.414935 I | mvcc: finished scheduled compaction at 661 (took 2.139467ms)
	
	* 
	* ==> kernel <==
	*  01:00:19 up 16 min,  0 users,  load average: 0.12, 0.19, 0.22
	Linux old-k8s-version-732472 5.10.57 #1 SMP Mon Nov 27 21:58:27 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [f0bcb5d5d5a7f5d15294b93974a89442f39c6d5b72f90c1bd8455ef232d30201] <==
	* I1128 00:53:08.263253       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1128 00:53:08.263674       1 handler_proxy.go:99] no RequestInfo found in the context
	E1128 00:53:08.263852       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 00:53:08.263882       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 00:54:45.605522       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1128 00:54:45.605792       1 handler_proxy.go:99] no RequestInfo found in the context
	E1128 00:54:45.605920       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 00:54:45.605949       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 00:55:45.606245       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1128 00:55:45.606418       1 handler_proxy.go:99] no RequestInfo found in the context
	E1128 00:55:45.606463       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 00:55:45.606474       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 00:57:45.606845       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1128 00:57:45.606955       1 handler_proxy.go:99] no RequestInfo found in the context
	E1128 00:57:45.607036       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 00:57:45.607048       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 00:59:45.608936       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1128 00:59:45.609052       1 handler_proxy.go:99] no RequestInfo found in the context
	E1128 00:59:45.609119       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 00:59:45.609126       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [1823a59d40c071836f4fd7cfa2e29752ad7e0c7464b7c9cda8a536c91acacd0d] <==
	* E1128 00:54:06.488101       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 00:54:20.657000       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 00:54:36.740022       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 00:54:52.659019       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 00:55:06.992250       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 00:55:24.660884       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 00:55:37.244218       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 00:55:56.663115       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 00:56:07.496660       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 00:56:28.665521       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 00:56:37.748894       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 00:57:00.667291       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 00:57:08.001542       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 00:57:32.669468       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 00:57:38.253932       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 00:58:04.671619       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 00:58:08.506016       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 00:58:36.673763       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 00:58:38.757903       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 00:59:08.676094       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 00:59:09.009926       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1128 00:59:39.261812       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 00:59:40.677985       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 01:00:09.513833       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 01:00:12.680326       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [9b1825f1d0c823d6402042029f120e8bf1bc20e1e5c148ea46b80e076d2ce506] <==
	* W1128 00:50:07.273928       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1128 00:50:07.290292       1 node.go:135] Successfully retrieved node IP: 192.168.39.172
	I1128 00:50:07.290328       1 server_others.go:149] Using iptables Proxier.
	I1128 00:50:07.292144       1 server.go:529] Version: v1.16.0
	I1128 00:50:07.299044       1 config.go:131] Starting endpoints config controller
	I1128 00:50:07.304229       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1128 00:50:07.307611       1 config.go:313] Starting service config controller
	I1128 00:50:07.307652       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1128 00:50:07.404590       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1128 00:50:07.409694       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [a8ef985c9c8afad330ec7fb85589854860148ded23563c2ed76681bf82c48022] <==
	* W1128 00:49:44.638960       1 authentication.go:79] Authentication is disabled
	I1128 00:49:44.639002       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1128 00:49:44.639443       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1128 00:49:44.686124       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1128 00:49:44.694132       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 00:49:44.698758       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1128 00:49:44.698859       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1128 00:49:44.698953       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 00:49:44.701638       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 00:49:44.701729       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 00:49:44.701818       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1128 00:49:44.701893       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1128 00:49:44.701969       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 00:49:44.702660       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1128 00:49:45.693266       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1128 00:49:45.695090       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 00:49:45.700609       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1128 00:49:45.704851       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1128 00:49:45.705881       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 00:49:45.707337       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 00:49:45.709837       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 00:49:45.711477       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1128 00:49:45.714162       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1128 00:49:45.715215       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 00:49:45.716216       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 00:44:09 UTC, ends at Tue 2023-11-28 01:00:19 UTC. --
	Nov 28 00:55:49 old-k8s-version-732472 kubelet[3155]: E1128 00:55:49.837627    3155 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 28 00:55:49 old-k8s-version-732472 kubelet[3155]: E1128 00:55:49.837756    3155 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 28 00:55:49 old-k8s-version-732472 kubelet[3155]: E1128 00:55:49.837828    3155 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 28 00:55:49 old-k8s-version-732472 kubelet[3155]: E1128 00:55:49.837879    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Nov 28 00:56:00 old-k8s-version-732472 kubelet[3155]: E1128 00:56:00.800256    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:56:13 old-k8s-version-732472 kubelet[3155]: E1128 00:56:13.801900    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:56:27 old-k8s-version-732472 kubelet[3155]: E1128 00:56:27.800411    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:56:41 old-k8s-version-732472 kubelet[3155]: E1128 00:56:41.799892    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:56:53 old-k8s-version-732472 kubelet[3155]: E1128 00:56:53.801549    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:57:08 old-k8s-version-732472 kubelet[3155]: E1128 00:57:08.801743    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:57:23 old-k8s-version-732472 kubelet[3155]: E1128 00:57:23.800488    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:57:35 old-k8s-version-732472 kubelet[3155]: E1128 00:57:35.801030    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:57:48 old-k8s-version-732472 kubelet[3155]: E1128 00:57:48.800720    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:58:03 old-k8s-version-732472 kubelet[3155]: E1128 00:58:03.800736    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:58:17 old-k8s-version-732472 kubelet[3155]: E1128 00:58:17.801159    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:58:29 old-k8s-version-732472 kubelet[3155]: E1128 00:58:29.800802    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:58:40 old-k8s-version-732472 kubelet[3155]: E1128 00:58:40.800085    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:58:51 old-k8s-version-732472 kubelet[3155]: E1128 00:58:51.800135    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:59:05 old-k8s-version-732472 kubelet[3155]: E1128 00:59:05.800519    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:59:18 old-k8s-version-732472 kubelet[3155]: E1128 00:59:18.800962    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:59:32 old-k8s-version-732472 kubelet[3155]: E1128 00:59:32.800244    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:59:37 old-k8s-version-732472 kubelet[3155]: E1128 00:59:37.872047    3155 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Nov 28 00:59:46 old-k8s-version-732472 kubelet[3155]: E1128 00:59:46.800680    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 01:00:00 old-k8s-version-732472 kubelet[3155]: E1128 01:00:00.805038    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 01:00:14 old-k8s-version-732472 kubelet[3155]: E1128 01:00:14.800095    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [d3d9279a66ef5ba4ae3596d4bf3fb92de987a6e5d2eb6c74aa82ca7cd363f329] <==
	* I1128 00:50:07.720180       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 00:50:07.732723       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 00:50:07.732787       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1128 00:50:07.742963       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1128 00:50:07.743181       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-732472_21f89187-c378-43ba-acbe-0c31444d4fd8!
	I1128 00:50:07.744616       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd0a0a8e-b3f1-4694-90e5-0d6d2344bc64", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-732472_21f89187-c378-43ba-acbe-0c31444d4fd8 became leader
	I1128 00:50:07.843538       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-732472_21f89187-c378-43ba-acbe-0c31444d4fd8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-732472 -n old-k8s-version-732472
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-732472 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-nd9qp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-732472 describe pod metrics-server-74d5856cc6-nd9qp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-732472 describe pod metrics-server-74d5856cc6-nd9qp: exit status 1 (66.95621ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-nd9qp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-732472 describe pod metrics-server-74d5856cc6-nd9qp: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (461.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-304541 -n embed-certs-304541
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-11-28 01:05:21.365192428 +0000 UTC m=+6022.930219065
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-304541 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-304541 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.58µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-304541 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-304541 -n embed-certs-304541
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-304541 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-304541 logs -n 25: (1.23112787s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p old-k8s-version-732472                              | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-789586                              | stopped-upgrade-789586       | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-304541            | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-304541                                  | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-789586                              | stopped-upgrade-789586       | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-001086 | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	|         | disable-driver-mounts-001086                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:37 UTC |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-473615             | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:37 UTC | 28 Nov 23 00:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-473615                                   | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-732472             | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-488423  | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC | 28 Nov 23 00:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-732472                              | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC | 28 Nov 23 00:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-304541                 | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-304541                                  | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC | 28 Nov 23 00:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-473615                  | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-473615                                   | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC | 28 Nov 23 00:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-488423       | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:40 UTC | 28 Nov 23 00:48 UTC |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-732472                              | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 01:03 UTC | 28 Nov 23 01:03 UTC |
	| start   | -p newest-cni-517109 --memory=2200 --alsologtostderr   | newest-cni-517109            | jenkins | v1.32.0 | 28 Nov 23 01:03 UTC | 28 Nov 23 01:04 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-473615                                   | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 01:03 UTC | 28 Nov 23 01:03 UTC |
	| start   | -p auto-167798 --memory=3072                           | auto-167798                  | jenkins | v1.32.0 | 28 Nov 23 01:03 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-517109             | newest-cni-517109            | jenkins | v1.32.0 | 28 Nov 23 01:04 UTC | 28 Nov 23 01:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-517109                                   | newest-cni-517109            | jenkins | v1.32.0 | 28 Nov 23 01:04 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 01:03:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 01:03:29.461689   51138 out.go:296] Setting OutFile to fd 1 ...
	I1128 01:03:29.461948   51138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 01:03:29.461957   51138 out.go:309] Setting ErrFile to fd 2...
	I1128 01:03:29.461962   51138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 01:03:29.462179   51138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1128 01:03:29.462746   51138 out.go:303] Setting JSON to false
	I1128 01:03:29.463753   51138 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6357,"bootTime":1701127053,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 01:03:29.463816   51138 start.go:138] virtualization: kvm guest
	I1128 01:03:29.466111   51138 out.go:177] * [auto-167798] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 01:03:29.467704   51138 out.go:177]   - MINIKUBE_LOCATION=17206
	I1128 01:03:29.467642   51138 notify.go:220] Checking for updates...
	I1128 01:03:29.469221   51138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 01:03:29.470779   51138 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 01:03:29.472223   51138 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1128 01:03:29.473631   51138 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 01:03:29.475112   51138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 01:03:29.477093   51138 config.go:182] Loaded profile config "default-k8s-diff-port-488423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 01:03:29.477227   51138 config.go:182] Loaded profile config "embed-certs-304541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 01:03:29.477322   51138 config.go:182] Loaded profile config "newest-cni-517109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 01:03:29.477402   51138 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 01:03:29.515501   51138 out.go:177] * Using the kvm2 driver based on user configuration
	I1128 01:03:29.516802   51138 start.go:298] selected driver: kvm2
	I1128 01:03:29.516816   51138 start.go:902] validating driver "kvm2" against <nil>
	I1128 01:03:29.516831   51138 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 01:03:29.517551   51138 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 01:03:29.517628   51138 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17206-4749/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 01:03:29.533122   51138 install.go:137] /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I1128 01:03:29.533175   51138 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1128 01:03:29.533382   51138 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 01:03:29.533457   51138 cni.go:84] Creating CNI manager for ""
	I1128 01:03:29.533477   51138 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 01:03:29.533499   51138 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1128 01:03:29.533514   51138 start_flags.go:323] config:
	{Name:auto-167798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-167798 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 01:03:29.533690   51138 iso.go:125] acquiring lock: {Name:mkcbf4fbddcb89ef7fa17df683cb708781ecb7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 01:03:29.535564   51138 out.go:177] * Starting control plane node auto-167798 in cluster auto-167798
	I1128 01:03:24.645707   50808 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1128 01:03:24.645837   50808 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 01:03:24.645875   50808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 01:03:24.659848   50808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36567
	I1128 01:03:24.660297   50808 main.go:141] libmachine: () Calling .GetVersion
	I1128 01:03:24.660766   50808 main.go:141] libmachine: Using API Version  1
	I1128 01:03:24.660786   50808 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 01:03:24.661050   50808 main.go:141] libmachine: () Calling .GetMachineName
	I1128 01:03:24.661213   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetMachineName
	I1128 01:03:24.661369   50808 main.go:141] libmachine: (newest-cni-517109) Calling .DriverName
	I1128 01:03:24.661496   50808 start.go:159] libmachine.API.Create for "newest-cni-517109" (driver="kvm2")
	I1128 01:03:24.661527   50808 client.go:168] LocalClient.Create starting
	I1128 01:03:24.661552   50808 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem
	I1128 01:03:24.661587   50808 main.go:141] libmachine: Decoding PEM data...
	I1128 01:03:24.661611   50808 main.go:141] libmachine: Parsing certificate...
	I1128 01:03:24.661670   50808 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem
	I1128 01:03:24.661699   50808 main.go:141] libmachine: Decoding PEM data...
	I1128 01:03:24.661716   50808 main.go:141] libmachine: Parsing certificate...
	I1128 01:03:24.661750   50808 main.go:141] libmachine: Running pre-create checks...
	I1128 01:03:24.661782   50808 main.go:141] libmachine: (newest-cni-517109) Calling .PreCreateCheck
	I1128 01:03:24.662144   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetConfigRaw
	I1128 01:03:24.662575   50808 main.go:141] libmachine: Creating machine...
	I1128 01:03:24.662590   50808 main.go:141] libmachine: (newest-cni-517109) Calling .Create
	I1128 01:03:24.662761   50808 main.go:141] libmachine: (newest-cni-517109) Creating KVM machine...
	I1128 01:03:24.663968   50808 main.go:141] libmachine: (newest-cni-517109) DBG | found existing default KVM network
	I1128 01:03:24.665820   50808 main.go:141] libmachine: (newest-cni-517109) DBG | I1128 01:03:24.665661   50831 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000147f10}
	I1128 01:03:24.671144   50808 main.go:141] libmachine: (newest-cni-517109) DBG | trying to create private KVM network mk-newest-cni-517109 192.168.39.0/24...
	I1128 01:03:24.747366   50808 main.go:141] libmachine: (newest-cni-517109) DBG | private KVM network mk-newest-cni-517109 192.168.39.0/24 created
	I1128 01:03:24.747432   50808 main.go:141] libmachine: (newest-cni-517109) DBG | I1128 01:03:24.747314   50831 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17206-4749/.minikube
	I1128 01:03:24.747451   50808 main.go:141] libmachine: (newest-cni-517109) Setting up store path in /home/jenkins/minikube-integration/17206-4749/.minikube/machines/newest-cni-517109 ...
	I1128 01:03:24.747477   50808 main.go:141] libmachine: (newest-cni-517109) Building disk image from file:///home/jenkins/minikube-integration/17206-4749/.minikube/cache/iso/amd64/minikube-v1.32.1-1701107474-17206-amd64.iso
	I1128 01:03:24.747775   50808 main.go:141] libmachine: (newest-cni-517109) Downloading /home/jenkins/minikube-integration/17206-4749/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17206-4749/.minikube/cache/iso/amd64/minikube-v1.32.1-1701107474-17206-amd64.iso...
	I1128 01:03:24.962069   50808 main.go:141] libmachine: (newest-cni-517109) DBG | I1128 01:03:24.961900   50831 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/newest-cni-517109/id_rsa...
	I1128 01:03:25.296071   50808 main.go:141] libmachine: (newest-cni-517109) DBG | I1128 01:03:25.295957   50831 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/newest-cni-517109/newest-cni-517109.rawdisk...
	I1128 01:03:25.296106   50808 main.go:141] libmachine: (newest-cni-517109) DBG | Writing magic tar header
	I1128 01:03:25.296131   50808 main.go:141] libmachine: (newest-cni-517109) DBG | Writing SSH key tar header
	I1128 01:03:25.296204   50808 main.go:141] libmachine: (newest-cni-517109) DBG | I1128 01:03:25.296147   50831 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17206-4749/.minikube/machines/newest-cni-517109 ...
	I1128 01:03:25.296285   50808 main.go:141] libmachine: (newest-cni-517109) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/newest-cni-517109
	I1128 01:03:25.296310   50808 main.go:141] libmachine: (newest-cni-517109) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube/machines/newest-cni-517109 (perms=drwx------)
	I1128 01:03:25.296324   50808 main.go:141] libmachine: (newest-cni-517109) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube/machines (perms=drwxr-xr-x)
	I1128 01:03:25.296333   50808 main.go:141] libmachine: (newest-cni-517109) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube/machines
	I1128 01:03:25.296344   50808 main.go:141] libmachine: (newest-cni-517109) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube
	I1128 01:03:25.296354   50808 main.go:141] libmachine: (newest-cni-517109) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749
	I1128 01:03:25.296361   50808 main.go:141] libmachine: (newest-cni-517109) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube (perms=drwxr-xr-x)
	I1128 01:03:25.296372   50808 main.go:141] libmachine: (newest-cni-517109) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749 (perms=drwxrwxr-x)
	I1128 01:03:25.296380   50808 main.go:141] libmachine: (newest-cni-517109) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1128 01:03:25.296390   50808 main.go:141] libmachine: (newest-cni-517109) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1128 01:03:25.296396   50808 main.go:141] libmachine: (newest-cni-517109) Creating domain...
	I1128 01:03:25.296405   50808 main.go:141] libmachine: (newest-cni-517109) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1128 01:03:25.296414   50808 main.go:141] libmachine: (newest-cni-517109) DBG | Checking permissions on dir: /home/jenkins
	I1128 01:03:25.296422   50808 main.go:141] libmachine: (newest-cni-517109) DBG | Checking permissions on dir: /home
	I1128 01:03:25.296430   50808 main.go:141] libmachine: (newest-cni-517109) DBG | Skipping /home - not owner
	I1128 01:03:25.297812   50808 main.go:141] libmachine: (newest-cni-517109) define libvirt domain using xml: 
	I1128 01:03:25.297840   50808 main.go:141] libmachine: (newest-cni-517109) <domain type='kvm'>
	I1128 01:03:25.297853   50808 main.go:141] libmachine: (newest-cni-517109)   <name>newest-cni-517109</name>
	I1128 01:03:25.297871   50808 main.go:141] libmachine: (newest-cni-517109)   <memory unit='MiB'>2200</memory>
	I1128 01:03:25.297901   50808 main.go:141] libmachine: (newest-cni-517109)   <vcpu>2</vcpu>
	I1128 01:03:25.297930   50808 main.go:141] libmachine: (newest-cni-517109)   <features>
	I1128 01:03:25.297960   50808 main.go:141] libmachine: (newest-cni-517109)     <acpi/>
	I1128 01:03:25.297985   50808 main.go:141] libmachine: (newest-cni-517109)     <apic/>
	I1128 01:03:25.298006   50808 main.go:141] libmachine: (newest-cni-517109)     <pae/>
	I1128 01:03:25.298020   50808 main.go:141] libmachine: (newest-cni-517109)     
	I1128 01:03:25.298040   50808 main.go:141] libmachine: (newest-cni-517109)   </features>
	I1128 01:03:25.298054   50808 main.go:141] libmachine: (newest-cni-517109)   <cpu mode='host-passthrough'>
	I1128 01:03:25.298075   50808 main.go:141] libmachine: (newest-cni-517109)   
	I1128 01:03:25.298090   50808 main.go:141] libmachine: (newest-cni-517109)   </cpu>
	I1128 01:03:25.298104   50808 main.go:141] libmachine: (newest-cni-517109)   <os>
	I1128 01:03:25.298115   50808 main.go:141] libmachine: (newest-cni-517109)     <type>hvm</type>
	I1128 01:03:25.298130   50808 main.go:141] libmachine: (newest-cni-517109)     <boot dev='cdrom'/>
	I1128 01:03:25.298142   50808 main.go:141] libmachine: (newest-cni-517109)     <boot dev='hd'/>
	I1128 01:03:25.298237   50808 main.go:141] libmachine: (newest-cni-517109)     <bootmenu enable='no'/>
	I1128 01:03:25.298265   50808 main.go:141] libmachine: (newest-cni-517109)   </os>
	I1128 01:03:25.298295   50808 main.go:141] libmachine: (newest-cni-517109)   <devices>
	I1128 01:03:25.298310   50808 main.go:141] libmachine: (newest-cni-517109)     <disk type='file' device='cdrom'>
	I1128 01:03:25.298330   50808 main.go:141] libmachine: (newest-cni-517109)       <source file='/home/jenkins/minikube-integration/17206-4749/.minikube/machines/newest-cni-517109/boot2docker.iso'/>
	I1128 01:03:25.298360   50808 main.go:141] libmachine: (newest-cni-517109)       <target dev='hdc' bus='scsi'/>
	I1128 01:03:25.298378   50808 main.go:141] libmachine: (newest-cni-517109)       <readonly/>
	I1128 01:03:25.298388   50808 main.go:141] libmachine: (newest-cni-517109)     </disk>
	I1128 01:03:25.298401   50808 main.go:141] libmachine: (newest-cni-517109)     <disk type='file' device='disk'>
	I1128 01:03:25.298418   50808 main.go:141] libmachine: (newest-cni-517109)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1128 01:03:25.298435   50808 main.go:141] libmachine: (newest-cni-517109)       <source file='/home/jenkins/minikube-integration/17206-4749/.minikube/machines/newest-cni-517109/newest-cni-517109.rawdisk'/>
	I1128 01:03:25.298454   50808 main.go:141] libmachine: (newest-cni-517109)       <target dev='hda' bus='virtio'/>
	I1128 01:03:25.298468   50808 main.go:141] libmachine: (newest-cni-517109)     </disk>
	I1128 01:03:25.298484   50808 main.go:141] libmachine: (newest-cni-517109)     <interface type='network'>
	I1128 01:03:25.298497   50808 main.go:141] libmachine: (newest-cni-517109)       <source network='mk-newest-cni-517109'/>
	I1128 01:03:25.298512   50808 main.go:141] libmachine: (newest-cni-517109)       <model type='virtio'/>
	I1128 01:03:25.298528   50808 main.go:141] libmachine: (newest-cni-517109)     </interface>
	I1128 01:03:25.298546   50808 main.go:141] libmachine: (newest-cni-517109)     <interface type='network'>
	I1128 01:03:25.298559   50808 main.go:141] libmachine: (newest-cni-517109)       <source network='default'/>
	I1128 01:03:25.298573   50808 main.go:141] libmachine: (newest-cni-517109)       <model type='virtio'/>
	I1128 01:03:25.298594   50808 main.go:141] libmachine: (newest-cni-517109)     </interface>
	I1128 01:03:25.298608   50808 main.go:141] libmachine: (newest-cni-517109)     <serial type='pty'>
	I1128 01:03:25.298633   50808 main.go:141] libmachine: (newest-cni-517109)       <target port='0'/>
	I1128 01:03:25.298655   50808 main.go:141] libmachine: (newest-cni-517109)     </serial>
	I1128 01:03:25.298669   50808 main.go:141] libmachine: (newest-cni-517109)     <console type='pty'>
	I1128 01:03:25.298683   50808 main.go:141] libmachine: (newest-cni-517109)       <target type='serial' port='0'/>
	I1128 01:03:25.298694   50808 main.go:141] libmachine: (newest-cni-517109)     </console>
	I1128 01:03:25.298705   50808 main.go:141] libmachine: (newest-cni-517109)     <rng model='virtio'>
	I1128 01:03:25.298729   50808 main.go:141] libmachine: (newest-cni-517109)       <backend model='random'>/dev/random</backend>
	I1128 01:03:25.298742   50808 main.go:141] libmachine: (newest-cni-517109)     </rng>
	I1128 01:03:25.298753   50808 main.go:141] libmachine: (newest-cni-517109)     
	I1128 01:03:25.298765   50808 main.go:141] libmachine: (newest-cni-517109)     
	I1128 01:03:25.298777   50808 main.go:141] libmachine: (newest-cni-517109)   </devices>
	I1128 01:03:25.298790   50808 main.go:141] libmachine: (newest-cni-517109) </domain>
	I1128 01:03:25.298804   50808 main.go:141] libmachine: (newest-cni-517109) 
	I1128 01:03:25.302968   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:c5:53:46 in network default
	I1128 01:03:25.303607   50808 main.go:141] libmachine: (newest-cni-517109) Ensuring networks are active...
	I1128 01:03:25.303623   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:25.304276   50808 main.go:141] libmachine: (newest-cni-517109) Ensuring network default is active
	I1128 01:03:25.304565   50808 main.go:141] libmachine: (newest-cni-517109) Ensuring network mk-newest-cni-517109 is active
	I1128 01:03:25.305088   50808 main.go:141] libmachine: (newest-cni-517109) Getting domain xml...
	I1128 01:03:25.305769   50808 main.go:141] libmachine: (newest-cni-517109) Creating domain...
	I1128 01:03:26.662752   50808 main.go:141] libmachine: (newest-cni-517109) Waiting to get IP...
	I1128 01:03:26.663518   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:26.663985   50808 main.go:141] libmachine: (newest-cni-517109) DBG | unable to find current IP address of domain newest-cni-517109 in network mk-newest-cni-517109
	I1128 01:03:26.664103   50808 main.go:141] libmachine: (newest-cni-517109) DBG | I1128 01:03:26.664019   50831 retry.go:31] will retry after 298.978151ms: waiting for machine to come up
	I1128 01:03:26.964686   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:26.965224   50808 main.go:141] libmachine: (newest-cni-517109) DBG | unable to find current IP address of domain newest-cni-517109 in network mk-newest-cni-517109
	I1128 01:03:26.965249   50808 main.go:141] libmachine: (newest-cni-517109) DBG | I1128 01:03:26.965168   50831 retry.go:31] will retry after 360.128952ms: waiting for machine to come up
	I1128 01:03:27.326521   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:27.327004   50808 main.go:141] libmachine: (newest-cni-517109) DBG | unable to find current IP address of domain newest-cni-517109 in network mk-newest-cni-517109
	I1128 01:03:27.327036   50808 main.go:141] libmachine: (newest-cni-517109) DBG | I1128 01:03:27.326952   50831 retry.go:31] will retry after 439.636471ms: waiting for machine to come up
	I1128 01:03:27.768429   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:27.768920   50808 main.go:141] libmachine: (newest-cni-517109) DBG | unable to find current IP address of domain newest-cni-517109 in network mk-newest-cni-517109
	I1128 01:03:27.768952   50808 main.go:141] libmachine: (newest-cni-517109) DBG | I1128 01:03:27.768901   50831 retry.go:31] will retry after 435.603513ms: waiting for machine to come up
	I1128 01:03:28.206508   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:28.206938   50808 main.go:141] libmachine: (newest-cni-517109) DBG | unable to find current IP address of domain newest-cni-517109 in network mk-newest-cni-517109
	I1128 01:03:28.206987   50808 main.go:141] libmachine: (newest-cni-517109) DBG | I1128 01:03:28.206899   50831 retry.go:31] will retry after 636.040983ms: waiting for machine to come up
	I1128 01:03:29.159848   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:29.160665   50808 main.go:141] libmachine: (newest-cni-517109) DBG | unable to find current IP address of domain newest-cni-517109 in network mk-newest-cni-517109
	I1128 01:03:29.160697   50808 main.go:141] libmachine: (newest-cni-517109) DBG | I1128 01:03:29.160624   50831 retry.go:31] will retry after 653.663499ms: waiting for machine to come up
	I1128 01:03:29.536886   51138 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 01:03:29.536924   51138 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1128 01:03:29.536936   51138 cache.go:56] Caching tarball of preloaded images
	I1128 01:03:29.537019   51138 preload.go:174] Found /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 01:03:29.537031   51138 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 01:03:29.537141   51138 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/config.json ...
	I1128 01:03:29.537167   51138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/config.json: {Name:mk4777179a62778dccd7c6121e0965612d6b787e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:03:29.537322   51138 start.go:365] acquiring machines lock for auto-167798: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 01:03:29.815835   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:29.816295   50808 main.go:141] libmachine: (newest-cni-517109) DBG | unable to find current IP address of domain newest-cni-517109 in network mk-newest-cni-517109
	I1128 01:03:29.816324   50808 main.go:141] libmachine: (newest-cni-517109) DBG | I1128 01:03:29.816243   50831 retry.go:31] will retry after 879.035715ms: waiting for machine to come up
	I1128 01:03:30.696424   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:30.696832   50808 main.go:141] libmachine: (newest-cni-517109) DBG | unable to find current IP address of domain newest-cni-517109 in network mk-newest-cni-517109
	I1128 01:03:30.696867   50808 main.go:141] libmachine: (newest-cni-517109) DBG | I1128 01:03:30.696797   50831 retry.go:31] will retry after 1.044593627s: waiting for machine to come up
	I1128 01:03:31.743063   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:31.743526   50808 main.go:141] libmachine: (newest-cni-517109) DBG | unable to find current IP address of domain newest-cni-517109 in network mk-newest-cni-517109
	I1128 01:03:31.743560   50808 main.go:141] libmachine: (newest-cni-517109) DBG | I1128 01:03:31.743476   50831 retry.go:31] will retry after 1.265921236s: waiting for machine to come up
	I1128 01:03:33.010957   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:33.011493   50808 main.go:141] libmachine: (newest-cni-517109) DBG | unable to find current IP address of domain newest-cni-517109 in network mk-newest-cni-517109
	I1128 01:03:33.011518   50808 main.go:141] libmachine: (newest-cni-517109) DBG | I1128 01:03:33.011440   50831 retry.go:31] will retry after 1.941211908s: waiting for machine to come up
	I1128 01:03:34.954391   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:34.954856   50808 main.go:141] libmachine: (newest-cni-517109) DBG | unable to find current IP address of domain newest-cni-517109 in network mk-newest-cni-517109
	I1128 01:03:34.954885   50808 main.go:141] libmachine: (newest-cni-517109) DBG | I1128 01:03:34.954809   50831 retry.go:31] will retry after 2.735077617s: waiting for machine to come up
	I1128 01:03:37.693628   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:37.694068   50808 main.go:141] libmachine: (newest-cni-517109) DBG | unable to find current IP address of domain newest-cni-517109 in network mk-newest-cni-517109
	I1128 01:03:37.694086   50808 main.go:141] libmachine: (newest-cni-517109) DBG | I1128 01:03:37.694047   50831 retry.go:31] will retry after 2.67120702s: waiting for machine to come up
	I1128 01:03:40.366841   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:40.367241   50808 main.go:141] libmachine: (newest-cni-517109) DBG | unable to find current IP address of domain newest-cni-517109 in network mk-newest-cni-517109
	I1128 01:03:40.367281   50808 main.go:141] libmachine: (newest-cni-517109) DBG | I1128 01:03:40.367198   50831 retry.go:31] will retry after 3.557737786s: waiting for machine to come up
	I1128 01:03:43.928784   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:43.929317   50808 main.go:141] libmachine: (newest-cni-517109) DBG | unable to find current IP address of domain newest-cni-517109 in network mk-newest-cni-517109
	I1128 01:03:43.929353   50808 main.go:141] libmachine: (newest-cni-517109) DBG | I1128 01:03:43.929262   50831 retry.go:31] will retry after 5.24611437s: waiting for machine to come up
	I1128 01:03:49.179967   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:49.180373   50808 main.go:141] libmachine: (newest-cni-517109) Found IP for machine: 192.168.39.231
	I1128 01:03:49.180396   50808 main.go:141] libmachine: (newest-cni-517109) Reserving static IP address...
	I1128 01:03:49.180421   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has current primary IP address 192.168.39.231 and MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:49.180829   50808 main.go:141] libmachine: (newest-cni-517109) DBG | unable to find host DHCP lease matching {name: "newest-cni-517109", mac: "52:54:00:af:04:eb", ip: "192.168.39.231"} in network mk-newest-cni-517109
	I1128 01:03:49.256147   50808 main.go:141] libmachine: (newest-cni-517109) DBG | Getting to WaitForSSH function...
	I1128 01:03:49.256178   50808 main.go:141] libmachine: (newest-cni-517109) Reserved static IP address: 192.168.39.231
	I1128 01:03:49.256194   50808 main.go:141] libmachine: (newest-cni-517109) Waiting for SSH to be available...
	I1128 01:03:49.258753   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:49.259140   50808 main.go:141] libmachine: (newest-cni-517109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:eb", ip: ""} in network mk-newest-cni-517109: {Iface:virbr1 ExpiryTime:2023-11-28 02:03:41 +0000 UTC Type:0 Mac:52:54:00:af:04:eb Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:minikube Clientid:01:52:54:00:af:04:eb}
	I1128 01:03:49.259173   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined IP address 192.168.39.231 and MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:49.259364   50808 main.go:141] libmachine: (newest-cni-517109) DBG | Using SSH client type: external
	I1128 01:03:49.259394   50808 main.go:141] libmachine: (newest-cni-517109) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/newest-cni-517109/id_rsa (-rw-------)
	I1128 01:03:49.259443   50808 main.go:141] libmachine: (newest-cni-517109) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/newest-cni-517109/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 01:03:49.259463   50808 main.go:141] libmachine: (newest-cni-517109) DBG | About to run SSH command:
	I1128 01:03:49.259474   50808 main.go:141] libmachine: (newest-cni-517109) DBG | exit 0
	I1128 01:03:49.360624   50808 main.go:141] libmachine: (newest-cni-517109) DBG | SSH cmd err, output: <nil>: 
	I1128 01:03:49.360949   50808 main.go:141] libmachine: (newest-cni-517109) KVM machine creation complete!
	I1128 01:03:49.361293   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetConfigRaw
	I1128 01:03:49.361803   50808 main.go:141] libmachine: (newest-cni-517109) Calling .DriverName
	I1128 01:03:49.362003   50808 main.go:141] libmachine: (newest-cni-517109) Calling .DriverName
	I1128 01:03:49.362188   50808 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1128 01:03:49.362206   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetState
	I1128 01:03:49.363373   50808 main.go:141] libmachine: Detecting operating system of created instance...
	I1128 01:03:49.363402   50808 main.go:141] libmachine: Waiting for SSH to be available...
	I1128 01:03:49.363411   50808 main.go:141] libmachine: Getting to WaitForSSH function...
	I1128 01:03:49.363426   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHHostname
	I1128 01:03:49.365549   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:49.365899   50808 main.go:141] libmachine: (newest-cni-517109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:eb", ip: ""} in network mk-newest-cni-517109: {Iface:virbr1 ExpiryTime:2023-11-28 02:03:41 +0000 UTC Type:0 Mac:52:54:00:af:04:eb Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:newest-cni-517109 Clientid:01:52:54:00:af:04:eb}
	I1128 01:03:49.365924   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined IP address 192.168.39.231 and MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:49.366058   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHPort
	I1128 01:03:49.366245   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHKeyPath
	I1128 01:03:49.366384   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHKeyPath
	I1128 01:03:49.366508   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHUsername
	I1128 01:03:49.366662   50808 main.go:141] libmachine: Using SSH client type: native
	I1128 01:03:49.367012   50808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1128 01:03:49.367029   50808 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1128 01:03:49.500167   50808 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 01:03:49.500194   50808 main.go:141] libmachine: Detecting the provisioner...
	I1128 01:03:49.500212   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHHostname
	I1128 01:03:49.503093   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:49.503534   50808 main.go:141] libmachine: (newest-cni-517109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:eb", ip: ""} in network mk-newest-cni-517109: {Iface:virbr1 ExpiryTime:2023-11-28 02:03:41 +0000 UTC Type:0 Mac:52:54:00:af:04:eb Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:newest-cni-517109 Clientid:01:52:54:00:af:04:eb}
	I1128 01:03:49.503566   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined IP address 192.168.39.231 and MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:49.503696   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHPort
	I1128 01:03:49.503917   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHKeyPath
	I1128 01:03:49.504077   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHKeyPath
	I1128 01:03:49.504233   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHUsername
	I1128 01:03:49.504425   50808 main.go:141] libmachine: Using SSH client type: native
	I1128 01:03:49.504729   50808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1128 01:03:49.504741   50808 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1128 01:03:50.873692   51138 start.go:369] acquired machines lock for "auto-167798" in 21.336335308s
	I1128 01:03:50.873761   51138 start.go:93] Provisioning new machine with config: &{Name:auto-167798 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-167798 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 01:03:50.873879   51138 start.go:125] createHost starting for "" (driver="kvm2")
	I1128 01:03:49.637719   50808 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g8be4f20-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1128 01:03:49.637795   50808 main.go:141] libmachine: found compatible host: buildroot
	I1128 01:03:49.637813   50808 main.go:141] libmachine: Provisioning with buildroot...
	I1128 01:03:49.637826   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetMachineName
	I1128 01:03:49.638107   50808 buildroot.go:166] provisioning hostname "newest-cni-517109"
	I1128 01:03:49.638136   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetMachineName
	I1128 01:03:49.638336   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHHostname
	I1128 01:03:49.641062   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:49.641475   50808 main.go:141] libmachine: (newest-cni-517109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:eb", ip: ""} in network mk-newest-cni-517109: {Iface:virbr1 ExpiryTime:2023-11-28 02:03:41 +0000 UTC Type:0 Mac:52:54:00:af:04:eb Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:newest-cni-517109 Clientid:01:52:54:00:af:04:eb}
	I1128 01:03:49.641515   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined IP address 192.168.39.231 and MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:49.641677   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHPort
	I1128 01:03:49.641831   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHKeyPath
	I1128 01:03:49.641992   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHKeyPath
	I1128 01:03:49.642166   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHUsername
	I1128 01:03:49.642362   50808 main.go:141] libmachine: Using SSH client type: native
	I1128 01:03:49.642824   50808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1128 01:03:49.642844   50808 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-517109 && echo "newest-cni-517109" | sudo tee /etc/hostname
	I1128 01:03:49.789225   50808 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-517109
	
	I1128 01:03:49.789257   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHHostname
	I1128 01:03:49.791943   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:49.792242   50808 main.go:141] libmachine: (newest-cni-517109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:eb", ip: ""} in network mk-newest-cni-517109: {Iface:virbr1 ExpiryTime:2023-11-28 02:03:41 +0000 UTC Type:0 Mac:52:54:00:af:04:eb Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:newest-cni-517109 Clientid:01:52:54:00:af:04:eb}
	I1128 01:03:49.792276   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined IP address 192.168.39.231 and MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:49.792395   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHPort
	I1128 01:03:49.792615   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHKeyPath
	I1128 01:03:49.792811   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHKeyPath
	I1128 01:03:49.792958   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHUsername
	I1128 01:03:49.793153   50808 main.go:141] libmachine: Using SSH client type: native
	I1128 01:03:49.793627   50808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1128 01:03:49.793656   50808 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-517109' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-517109/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-517109' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 01:03:49.932275   50808 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 01:03:49.932300   50808 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 01:03:49.932348   50808 buildroot.go:174] setting up certificates
	I1128 01:03:49.932366   50808 provision.go:83] configureAuth start
	I1128 01:03:49.932383   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetMachineName
	I1128 01:03:49.932690   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetIP
	I1128 01:03:49.935554   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:49.935904   50808 main.go:141] libmachine: (newest-cni-517109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:eb", ip: ""} in network mk-newest-cni-517109: {Iface:virbr1 ExpiryTime:2023-11-28 02:03:41 +0000 UTC Type:0 Mac:52:54:00:af:04:eb Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:newest-cni-517109 Clientid:01:52:54:00:af:04:eb}
	I1128 01:03:49.935939   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined IP address 192.168.39.231 and MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:49.936189   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHHostname
	I1128 01:03:49.938440   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:49.938753   50808 main.go:141] libmachine: (newest-cni-517109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:eb", ip: ""} in network mk-newest-cni-517109: {Iface:virbr1 ExpiryTime:2023-11-28 02:03:41 +0000 UTC Type:0 Mac:52:54:00:af:04:eb Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:newest-cni-517109 Clientid:01:52:54:00:af:04:eb}
	I1128 01:03:49.938781   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined IP address 192.168.39.231 and MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:49.938902   50808 provision.go:138] copyHostCerts
	I1128 01:03:49.938963   50808 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 01:03:49.938973   50808 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 01:03:49.939053   50808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 01:03:49.939150   50808 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 01:03:49.939158   50808 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 01:03:49.939181   50808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 01:03:49.939250   50808 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 01:03:49.939260   50808 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 01:03:49.939280   50808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 01:03:49.939334   50808 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.newest-cni-517109 san=[192.168.39.231 192.168.39.231 localhost 127.0.0.1 minikube newest-cni-517109]
	I1128 01:03:50.065852   50808 provision.go:172] copyRemoteCerts
	I1128 01:03:50.065914   50808 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 01:03:50.065949   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHHostname
	I1128 01:03:50.068739   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:50.069161   50808 main.go:141] libmachine: (newest-cni-517109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:eb", ip: ""} in network mk-newest-cni-517109: {Iface:virbr1 ExpiryTime:2023-11-28 02:03:41 +0000 UTC Type:0 Mac:52:54:00:af:04:eb Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:newest-cni-517109 Clientid:01:52:54:00:af:04:eb}
	I1128 01:03:50.069186   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined IP address 192.168.39.231 and MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:50.069410   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHPort
	I1128 01:03:50.069622   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHKeyPath
	I1128 01:03:50.069795   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHUsername
	I1128 01:03:50.069980   50808 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/newest-cni-517109/id_rsa Username:docker}
	I1128 01:03:50.166972   50808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 01:03:50.189272   50808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1128 01:03:50.212141   50808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 01:03:50.235007   50808 provision.go:86] duration metric: configureAuth took 302.626443ms
	I1128 01:03:50.235035   50808 buildroot.go:189] setting minikube options for container-runtime
	I1128 01:03:50.235252   50808 config.go:182] Loaded profile config "newest-cni-517109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 01:03:50.235326   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHHostname
	I1128 01:03:50.238012   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:50.238367   50808 main.go:141] libmachine: (newest-cni-517109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:eb", ip: ""} in network mk-newest-cni-517109: {Iface:virbr1 ExpiryTime:2023-11-28 02:03:41 +0000 UTC Type:0 Mac:52:54:00:af:04:eb Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:newest-cni-517109 Clientid:01:52:54:00:af:04:eb}
	I1128 01:03:50.238391   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined IP address 192.168.39.231 and MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:50.238564   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHPort
	I1128 01:03:50.238775   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHKeyPath
	I1128 01:03:50.238942   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHKeyPath
	I1128 01:03:50.239070   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHUsername
	I1128 01:03:50.239324   50808 main.go:141] libmachine: Using SSH client type: native
	I1128 01:03:50.239658   50808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1128 01:03:50.239674   50808 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 01:03:50.594315   50808 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 01:03:50.594347   50808 main.go:141] libmachine: Checking connection to Docker...
	I1128 01:03:50.594360   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetURL
	I1128 01:03:50.595522   50808 main.go:141] libmachine: (newest-cni-517109) DBG | Using libvirt version 6000000
	I1128 01:03:50.597829   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:50.598157   50808 main.go:141] libmachine: (newest-cni-517109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:eb", ip: ""} in network mk-newest-cni-517109: {Iface:virbr1 ExpiryTime:2023-11-28 02:03:41 +0000 UTC Type:0 Mac:52:54:00:af:04:eb Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:newest-cni-517109 Clientid:01:52:54:00:af:04:eb}
	I1128 01:03:50.598218   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined IP address 192.168.39.231 and MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:50.598398   50808 main.go:141] libmachine: Docker is up and running!
	I1128 01:03:50.598414   50808 main.go:141] libmachine: Reticulating splines...
	I1128 01:03:50.598422   50808 client.go:171] LocalClient.Create took 25.936885214s
	I1128 01:03:50.598448   50808 start.go:167] duration metric: libmachine.API.Create for "newest-cni-517109" took 25.93695361s
	I1128 01:03:50.598461   50808 start.go:300] post-start starting for "newest-cni-517109" (driver="kvm2")
	I1128 01:03:50.598478   50808 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 01:03:50.598511   50808 main.go:141] libmachine: (newest-cni-517109) Calling .DriverName
	I1128 01:03:50.598774   50808 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 01:03:50.598799   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHHostname
	I1128 01:03:50.601257   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:50.601693   50808 main.go:141] libmachine: (newest-cni-517109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:eb", ip: ""} in network mk-newest-cni-517109: {Iface:virbr1 ExpiryTime:2023-11-28 02:03:41 +0000 UTC Type:0 Mac:52:54:00:af:04:eb Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:newest-cni-517109 Clientid:01:52:54:00:af:04:eb}
	I1128 01:03:50.601722   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined IP address 192.168.39.231 and MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:50.601830   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHPort
	I1128 01:03:50.602029   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHKeyPath
	I1128 01:03:50.602171   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHUsername
	I1128 01:03:50.602306   50808 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/newest-cni-517109/id_rsa Username:docker}
	I1128 01:03:50.697500   50808 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 01:03:50.701643   50808 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 01:03:50.701668   50808 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 01:03:50.701720   50808 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 01:03:50.701787   50808 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 01:03:50.701880   50808 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 01:03:50.709571   50808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 01:03:50.733086   50808 start.go:303] post-start completed in 134.610555ms
	I1128 01:03:50.733134   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetConfigRaw
	I1128 01:03:50.733729   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetIP
	I1128 01:03:50.736386   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:50.736737   50808 main.go:141] libmachine: (newest-cni-517109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:eb", ip: ""} in network mk-newest-cni-517109: {Iface:virbr1 ExpiryTime:2023-11-28 02:03:41 +0000 UTC Type:0 Mac:52:54:00:af:04:eb Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:newest-cni-517109 Clientid:01:52:54:00:af:04:eb}
	I1128 01:03:50.736785   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined IP address 192.168.39.231 and MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:50.737079   50808 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/config.json ...
	I1128 01:03:50.737286   50808 start.go:128] duration metric: createHost completed in 26.093250061s
	I1128 01:03:50.737308   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHHostname
	I1128 01:03:50.739511   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:50.739886   50808 main.go:141] libmachine: (newest-cni-517109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:eb", ip: ""} in network mk-newest-cni-517109: {Iface:virbr1 ExpiryTime:2023-11-28 02:03:41 +0000 UTC Type:0 Mac:52:54:00:af:04:eb Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:newest-cni-517109 Clientid:01:52:54:00:af:04:eb}
	I1128 01:03:50.739914   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined IP address 192.168.39.231 and MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:50.740047   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHPort
	I1128 01:03:50.740212   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHKeyPath
	I1128 01:03:50.740377   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHKeyPath
	I1128 01:03:50.740535   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHUsername
	I1128 01:03:50.740706   50808 main.go:141] libmachine: Using SSH client type: native
	I1128 01:03:50.741021   50808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I1128 01:03:50.741034   50808 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1128 01:03:50.873546   50808 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701133430.856257898
	
	I1128 01:03:50.873571   50808 fix.go:206] guest clock: 1701133430.856257898
	I1128 01:03:50.873580   50808 fix.go:219] Guest: 2023-11-28 01:03:50.856257898 +0000 UTC Remote: 2023-11-28 01:03:50.737298778 +0000 UTC m=+26.215198758 (delta=118.95912ms)
	I1128 01:03:50.873603   50808 fix.go:190] guest clock delta is within tolerance: 118.95912ms
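	The fix.go lines above read the guest clock over SSH (`date +%s.%N`), compare it to the host clock, and accept the drift because it falls inside the tolerance window. A minimal Go sketch of that comparison, using the timestamp from the log and a hypothetical one-second tolerance (the real tolerance lives in minikube's fix.go):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns the output of `date +%s.%N` into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1701133430.856257898") // value from the log above
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        // Hypothetical tolerance for illustration only.
        if delta < time.Second {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock drift too large: %v\n", delta)
        }
    }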
	I1128 01:03:50.873610   50808 start.go:83] releasing machines lock for "newest-cni-517109", held for 26.229656415s
	I1128 01:03:50.873648   50808 main.go:141] libmachine: (newest-cni-517109) Calling .DriverName
	I1128 01:03:50.873969   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetIP
	I1128 01:03:50.876862   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:50.877336   50808 main.go:141] libmachine: (newest-cni-517109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:eb", ip: ""} in network mk-newest-cni-517109: {Iface:virbr1 ExpiryTime:2023-11-28 02:03:41 +0000 UTC Type:0 Mac:52:54:00:af:04:eb Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:newest-cni-517109 Clientid:01:52:54:00:af:04:eb}
	I1128 01:03:50.877408   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined IP address 192.168.39.231 and MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:50.877429   50808 main.go:141] libmachine: (newest-cni-517109) Calling .DriverName
	I1128 01:03:50.877943   50808 main.go:141] libmachine: (newest-cni-517109) Calling .DriverName
	I1128 01:03:50.878129   50808 main.go:141] libmachine: (newest-cni-517109) Calling .DriverName
	I1128 01:03:50.878246   50808 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 01:03:50.878300   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHHostname
	I1128 01:03:50.878398   50808 ssh_runner.go:195] Run: cat /version.json
	I1128 01:03:50.878422   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHHostname
	I1128 01:03:50.881185   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:50.881534   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:50.881714   50808 main.go:141] libmachine: (newest-cni-517109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:eb", ip: ""} in network mk-newest-cni-517109: {Iface:virbr1 ExpiryTime:2023-11-28 02:03:41 +0000 UTC Type:0 Mac:52:54:00:af:04:eb Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:newest-cni-517109 Clientid:01:52:54:00:af:04:eb}
	I1128 01:03:50.881741   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined IP address 192.168.39.231 and MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:50.881849   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHPort
	I1128 01:03:50.881965   50808 main.go:141] libmachine: (newest-cni-517109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:eb", ip: ""} in network mk-newest-cni-517109: {Iface:virbr1 ExpiryTime:2023-11-28 02:03:41 +0000 UTC Type:0 Mac:52:54:00:af:04:eb Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:newest-cni-517109 Clientid:01:52:54:00:af:04:eb}
	I1128 01:03:50.881988   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined IP address 192.168.39.231 and MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:50.882053   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHKeyPath
	I1128 01:03:50.882175   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHPort
	I1128 01:03:50.882246   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHUsername
	I1128 01:03:50.882304   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHKeyPath
	I1128 01:03:50.882360   50808 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/newest-cni-517109/id_rsa Username:docker}
	I1128 01:03:50.882420   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHUsername
	I1128 01:03:50.882564   50808 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/newest-cni-517109/id_rsa Username:docker}
	I1128 01:03:51.003506   50808 ssh_runner.go:195] Run: systemctl --version
	I1128 01:03:51.009983   50808 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 01:03:51.180970   50808 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 01:03:51.188133   50808 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 01:03:51.188194   50808 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 01:03:51.206635   50808 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
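	The find/mv step above side-lines any pre-existing bridge or podman CNI configs by renaming them with a `.mk_disabled` suffix so they cannot conflict with the CNI that minikube configures. A rough Go equivalent of that rename-to-disable pass (directory taken from the log, logic illustrative):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        dir := "/etc/cni/net.d" // directory from the log above
        entries, err := os.ReadDir(dir)
        if err != nil {
            panic(err)
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            // Disable only bridge/podman configs, mirroring the find expression above.
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    panic(err)
                }
                fmt.Println("disabled", src)
            }
        }
    }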
	I1128 01:03:51.206657   50808 start.go:472] detecting cgroup driver to use...
	I1128 01:03:51.206723   50808 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 01:03:51.219272   50808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 01:03:51.230803   50808 docker.go:203] disabling cri-docker service (if available) ...
	I1128 01:03:51.230867   50808 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 01:03:51.242258   50808 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 01:03:51.253951   50808 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 01:03:51.359788   50808 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 01:03:51.492465   50808 docker.go:219] disabling docker service ...
	I1128 01:03:51.492527   50808 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 01:03:51.508977   50808 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 01:03:51.522401   50808 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 01:03:51.643643   50808 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 01:03:51.750481   50808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 01:03:51.764314   50808 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 01:03:51.783873   50808 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 01:03:51.783927   50808 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 01:03:51.794763   50808 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 01:03:51.794831   50808 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 01:03:51.805228   50808 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 01:03:51.815145   50808 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 01:03:51.825611   50808 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
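	The sed invocations above pin the CRI-O pause image and switch the cgroup manager to cgroupfs in /etc/crio/crio.conf.d/02-crio.conf. A small Go sketch of the same line-oriented rewrite (path and replacement values are the ones from the log; everything else is illustrative):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        path := "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, data, 0o644); err != nil {
            panic(err)
        }
    }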
	I1128 01:03:51.836846   50808 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 01:03:51.845938   50808 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 01:03:51.845983   50808 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 01:03:51.859361   50808 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 01:03:51.868393   50808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 01:03:51.982368   50808 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 01:03:52.181019   50808 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 01:03:52.181098   50808 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 01:03:52.190400   50808 start.go:540] Will wait 60s for crictl version
	I1128 01:03:52.190455   50808 ssh_runner.go:195] Run: which crictl
	I1128 01:03:52.194776   50808 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 01:03:52.241505   50808 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 01:03:52.241600   50808 ssh_runner.go:195] Run: crio --version
	I1128 01:03:52.288886   50808 ssh_runner.go:195] Run: crio --version
	I1128 01:03:52.341843   50808 out.go:177] * Preparing Kubernetes v1.29.0-rc.0 on CRI-O 1.24.1 ...
	I1128 01:03:52.343362   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetIP
	I1128 01:03:52.346417   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:52.346923   50808 main.go:141] libmachine: (newest-cni-517109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:eb", ip: ""} in network mk-newest-cni-517109: {Iface:virbr1 ExpiryTime:2023-11-28 02:03:41 +0000 UTC Type:0 Mac:52:54:00:af:04:eb Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:newest-cni-517109 Clientid:01:52:54:00:af:04:eb}
	I1128 01:03:52.346962   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined IP address 192.168.39.231 and MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:03:52.347219   50808 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1128 01:03:52.351850   50808 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
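	The bash one-liner above updates /etc/hosts atomically: it filters out any stale host.minikube.internal entry, appends the current mapping, and copies the temporary file back into place. A Go sketch of that filter-and-append pattern, pointed at a hypothetical local file so it can be tried safely:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost drops any existing line ending in "\t<name>" and appends a fresh
    // "<ip>\t<name>" entry, mirroring the grep -v / echo pipeline in the log above.
    func upsertHost(contents, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(contents, "\n") {
            if strings.HasSuffix(line, "\t"+name) || line == "" {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        path := "hosts.example" // hypothetical copy of /etc/hosts
        old, _ := os.ReadFile(path)
        updated := upsertHost(string(old), "192.168.39.1", "host.minikube.internal")
        if err := os.WriteFile(path, []byte(updated), 0o644); err != nil {
            panic(err)
        }
        fmt.Print(updated)
    }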
	I1128 01:03:52.365877   50808 localpath.go:92] copying /home/jenkins/minikube-integration/17206-4749/.minikube/client.crt -> /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/client.crt
	I1128 01:03:52.366004   50808 localpath.go:117] copying /home/jenkins/minikube-integration/17206-4749/.minikube/client.key -> /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/client.key
	I1128 01:03:52.367900   50808 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1128 01:03:50.877286   51138 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1128 01:03:50.877500   51138 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 01:03:50.877544   51138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 01:03:50.893674   51138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37999
	I1128 01:03:50.894084   51138 main.go:141] libmachine: () Calling .GetVersion
	I1128 01:03:50.894601   51138 main.go:141] libmachine: Using API Version  1
	I1128 01:03:50.894629   51138 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 01:03:50.894984   51138 main.go:141] libmachine: () Calling .GetMachineName
	I1128 01:03:50.895188   51138 main.go:141] libmachine: (auto-167798) Calling .GetMachineName
	I1128 01:03:50.895349   51138 main.go:141] libmachine: (auto-167798) Calling .DriverName
	I1128 01:03:50.895554   51138 start.go:159] libmachine.API.Create for "auto-167798" (driver="kvm2")
	I1128 01:03:50.895582   51138 client.go:168] LocalClient.Create starting
	I1128 01:03:50.895616   51138 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem
	I1128 01:03:50.895659   51138 main.go:141] libmachine: Decoding PEM data...
	I1128 01:03:50.895678   51138 main.go:141] libmachine: Parsing certificate...
	I1128 01:03:50.895755   51138 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem
	I1128 01:03:50.895781   51138 main.go:141] libmachine: Decoding PEM data...
	I1128 01:03:50.895801   51138 main.go:141] libmachine: Parsing certificate...
	I1128 01:03:50.895827   51138 main.go:141] libmachine: Running pre-create checks...
	I1128 01:03:50.895840   51138 main.go:141] libmachine: (auto-167798) Calling .PreCreateCheck
	I1128 01:03:50.896162   51138 main.go:141] libmachine: (auto-167798) Calling .GetConfigRaw
	I1128 01:03:50.896611   51138 main.go:141] libmachine: Creating machine...
	I1128 01:03:50.896624   51138 main.go:141] libmachine: (auto-167798) Calling .Create
	I1128 01:03:50.896767   51138 main.go:141] libmachine: (auto-167798) Creating KVM machine...
	I1128 01:03:50.897775   51138 main.go:141] libmachine: (auto-167798) DBG | found existing default KVM network
	I1128 01:03:50.899106   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:03:50.898951   51272 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b6:d8:c6} reservation:<nil>}
	I1128 01:03:50.899888   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:03:50.899804   51272 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:c6:d6:b8} reservation:<nil>}
	I1128 01:03:50.901342   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:03:50.901262   51272 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a5040}
	I1128 01:03:50.907353   51138 main.go:141] libmachine: (auto-167798) DBG | trying to create private KVM network mk-auto-167798 192.168.61.0/24...
	I1128 01:03:50.985733   51138 main.go:141] libmachine: (auto-167798) DBG | private KVM network mk-auto-167798 192.168.61.0/24 created
	I1128 01:03:50.985763   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:03:50.985703   51272 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17206-4749/.minikube
	I1128 01:03:50.985778   51138 main.go:141] libmachine: (auto-167798) Setting up store path in /home/jenkins/minikube-integration/17206-4749/.minikube/machines/auto-167798 ...
	I1128 01:03:50.985797   51138 main.go:141] libmachine: (auto-167798) Building disk image from file:///home/jenkins/minikube-integration/17206-4749/.minikube/cache/iso/amd64/minikube-v1.32.1-1701107474-17206-amd64.iso
	I1128 01:03:50.985904   51138 main.go:141] libmachine: (auto-167798) Downloading /home/jenkins/minikube-integration/17206-4749/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17206-4749/.minikube/cache/iso/amd64/minikube-v1.32.1-1701107474-17206-amd64.iso...
	I1128 01:03:51.205498   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:03:51.205350   51272 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/auto-167798/id_rsa...
	I1128 01:03:51.469680   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:03:51.469552   51272 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/auto-167798/auto-167798.rawdisk...
	I1128 01:03:51.469708   51138 main.go:141] libmachine: (auto-167798) DBG | Writing magic tar header
	I1128 01:03:51.469721   51138 main.go:141] libmachine: (auto-167798) DBG | Writing SSH key tar header
	I1128 01:03:51.469736   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:03:51.469685   51272 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17206-4749/.minikube/machines/auto-167798 ...
	I1128 01:03:51.469833   51138 main.go:141] libmachine: (auto-167798) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/auto-167798
	I1128 01:03:51.469886   51138 main.go:141] libmachine: (auto-167798) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube/machines/auto-167798 (perms=drwx------)
	I1128 01:03:51.469903   51138 main.go:141] libmachine: (auto-167798) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube/machines (perms=drwxr-xr-x)
	I1128 01:03:51.469916   51138 main.go:141] libmachine: (auto-167798) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube/machines
	I1128 01:03:51.469947   51138 main.go:141] libmachine: (auto-167798) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube
	I1128 01:03:51.469962   51138 main.go:141] libmachine: (auto-167798) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749
	I1128 01:03:51.469975   51138 main.go:141] libmachine: (auto-167798) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1128 01:03:51.469989   51138 main.go:141] libmachine: (auto-167798) DBG | Checking permissions on dir: /home/jenkins
	I1128 01:03:51.470001   51138 main.go:141] libmachine: (auto-167798) DBG | Checking permissions on dir: /home
	I1128 01:03:51.470014   51138 main.go:141] libmachine: (auto-167798) DBG | Skipping /home - not owner
	I1128 01:03:51.470096   51138 main.go:141] libmachine: (auto-167798) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube (perms=drwxr-xr-x)
	I1128 01:03:51.470144   51138 main.go:141] libmachine: (auto-167798) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749 (perms=drwxrwxr-x)
	I1128 01:03:51.470166   51138 main.go:141] libmachine: (auto-167798) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1128 01:03:51.470179   51138 main.go:141] libmachine: (auto-167798) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1128 01:03:51.470202   51138 main.go:141] libmachine: (auto-167798) Creating domain...
	I1128 01:03:51.471200   51138 main.go:141] libmachine: (auto-167798) define libvirt domain using xml: 
	I1128 01:03:51.471218   51138 main.go:141] libmachine: (auto-167798) <domain type='kvm'>
	I1128 01:03:51.471231   51138 main.go:141] libmachine: (auto-167798)   <name>auto-167798</name>
	I1128 01:03:51.471239   51138 main.go:141] libmachine: (auto-167798)   <memory unit='MiB'>3072</memory>
	I1128 01:03:51.471249   51138 main.go:141] libmachine: (auto-167798)   <vcpu>2</vcpu>
	I1128 01:03:51.471261   51138 main.go:141] libmachine: (auto-167798)   <features>
	I1128 01:03:51.471269   51138 main.go:141] libmachine: (auto-167798)     <acpi/>
	I1128 01:03:51.471279   51138 main.go:141] libmachine: (auto-167798)     <apic/>
	I1128 01:03:51.471307   51138 main.go:141] libmachine: (auto-167798)     <pae/>
	I1128 01:03:51.471331   51138 main.go:141] libmachine: (auto-167798)     
	I1128 01:03:51.471345   51138 main.go:141] libmachine: (auto-167798)   </features>
	I1128 01:03:51.471358   51138 main.go:141] libmachine: (auto-167798)   <cpu mode='host-passthrough'>
	I1128 01:03:51.471371   51138 main.go:141] libmachine: (auto-167798)   
	I1128 01:03:51.471388   51138 main.go:141] libmachine: (auto-167798)   </cpu>
	I1128 01:03:51.471400   51138 main.go:141] libmachine: (auto-167798)   <os>
	I1128 01:03:51.471413   51138 main.go:141] libmachine: (auto-167798)     <type>hvm</type>
	I1128 01:03:51.471431   51138 main.go:141] libmachine: (auto-167798)     <boot dev='cdrom'/>
	I1128 01:03:51.471442   51138 main.go:141] libmachine: (auto-167798)     <boot dev='hd'/>
	I1128 01:03:51.471456   51138 main.go:141] libmachine: (auto-167798)     <bootmenu enable='no'/>
	I1128 01:03:51.471471   51138 main.go:141] libmachine: (auto-167798)   </os>
	I1128 01:03:51.471503   51138 main.go:141] libmachine: (auto-167798)   <devices>
	I1128 01:03:51.471529   51138 main.go:141] libmachine: (auto-167798)     <disk type='file' device='cdrom'>
	I1128 01:03:51.471549   51138 main.go:141] libmachine: (auto-167798)       <source file='/home/jenkins/minikube-integration/17206-4749/.minikube/machines/auto-167798/boot2docker.iso'/>
	I1128 01:03:51.471562   51138 main.go:141] libmachine: (auto-167798)       <target dev='hdc' bus='scsi'/>
	I1128 01:03:51.471582   51138 main.go:141] libmachine: (auto-167798)       <readonly/>
	I1128 01:03:51.471594   51138 main.go:141] libmachine: (auto-167798)     </disk>
	I1128 01:03:51.471620   51138 main.go:141] libmachine: (auto-167798)     <disk type='file' device='disk'>
	I1128 01:03:51.471656   51138 main.go:141] libmachine: (auto-167798)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1128 01:03:51.471673   51138 main.go:141] libmachine: (auto-167798)       <source file='/home/jenkins/minikube-integration/17206-4749/.minikube/machines/auto-167798/auto-167798.rawdisk'/>
	I1128 01:03:51.471685   51138 main.go:141] libmachine: (auto-167798)       <target dev='hda' bus='virtio'/>
	I1128 01:03:51.471697   51138 main.go:141] libmachine: (auto-167798)     </disk>
	I1128 01:03:51.471707   51138 main.go:141] libmachine: (auto-167798)     <interface type='network'>
	I1128 01:03:51.471718   51138 main.go:141] libmachine: (auto-167798)       <source network='mk-auto-167798'/>
	I1128 01:03:51.471731   51138 main.go:141] libmachine: (auto-167798)       <model type='virtio'/>
	I1128 01:03:51.471745   51138 main.go:141] libmachine: (auto-167798)     </interface>
	I1128 01:03:51.471757   51138 main.go:141] libmachine: (auto-167798)     <interface type='network'>
	I1128 01:03:51.471771   51138 main.go:141] libmachine: (auto-167798)       <source network='default'/>
	I1128 01:03:51.471783   51138 main.go:141] libmachine: (auto-167798)       <model type='virtio'/>
	I1128 01:03:51.471795   51138 main.go:141] libmachine: (auto-167798)     </interface>
	I1128 01:03:51.471807   51138 main.go:141] libmachine: (auto-167798)     <serial type='pty'>
	I1128 01:03:51.471818   51138 main.go:141] libmachine: (auto-167798)       <target port='0'/>
	I1128 01:03:51.471839   51138 main.go:141] libmachine: (auto-167798)     </serial>
	I1128 01:03:51.471862   51138 main.go:141] libmachine: (auto-167798)     <console type='pty'>
	I1128 01:03:51.471890   51138 main.go:141] libmachine: (auto-167798)       <target type='serial' port='0'/>
	I1128 01:03:51.471904   51138 main.go:141] libmachine: (auto-167798)     </console>
	I1128 01:03:51.471917   51138 main.go:141] libmachine: (auto-167798)     <rng model='virtio'>
	I1128 01:03:51.471929   51138 main.go:141] libmachine: (auto-167798)       <backend model='random'>/dev/random</backend>
	I1128 01:03:51.471939   51138 main.go:141] libmachine: (auto-167798)     </rng>
	I1128 01:03:51.471971   51138 main.go:141] libmachine: (auto-167798)     
	I1128 01:03:51.471994   51138 main.go:141] libmachine: (auto-167798)     
	I1128 01:03:51.472009   51138 main.go:141] libmachine: (auto-167798)   </devices>
	I1128 01:03:51.472021   51138 main.go:141] libmachine: (auto-167798) </domain>
	I1128 01:03:51.472033   51138 main.go:141] libmachine: (auto-167798) 
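	The block above is the libvirt domain XML that the kvm2 driver defines before creating the VM. A stripped-down sketch of rendering such a definition with text/template (the trimmed XML and struct fields are illustrative, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    type domain struct {
        Name     string
        MemoryMB int
        CPUs     int
        DiskPath string
        Network  string
    }

    const domainXML = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <devices>
        <disk type='file' device='disk'>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    func main() {
        tmpl := template.Must(template.New("domain").Parse(domainXML))
        d := domain{
            Name:     "auto-167798",
            MemoryMB: 3072,
            CPUs:     2,
            DiskPath: "/path/to/auto-167798.rawdisk", // illustrative path
            Network:  "mk-auto-167798",
        }
        if err := tmpl.Execute(os.Stdout, d); err != nil {
            panic(err)
        }
    }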
	I1128 01:03:51.476486   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:d5:bb:ec in network default
	I1128 01:03:51.477116   51138 main.go:141] libmachine: (auto-167798) Ensuring networks are active...
	I1128 01:03:51.477148   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:03:51.477911   51138 main.go:141] libmachine: (auto-167798) Ensuring network default is active
	I1128 01:03:51.478296   51138 main.go:141] libmachine: (auto-167798) Ensuring network mk-auto-167798 is active
	I1128 01:03:51.478914   51138 main.go:141] libmachine: (auto-167798) Getting domain xml...
	I1128 01:03:51.479703   51138 main.go:141] libmachine: (auto-167798) Creating domain...
	I1128 01:03:52.852471   51138 main.go:141] libmachine: (auto-167798) Waiting to get IP...
	I1128 01:03:52.853676   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:03:52.854207   51138 main.go:141] libmachine: (auto-167798) DBG | unable to find current IP address of domain auto-167798 in network mk-auto-167798
	I1128 01:03:52.854262   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:03:52.854180   51272 retry.go:31] will retry after 287.053602ms: waiting for machine to come up
	I1128 01:03:53.142497   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:03:53.143174   51138 main.go:141] libmachine: (auto-167798) DBG | unable to find current IP address of domain auto-167798 in network mk-auto-167798
	I1128 01:03:53.143221   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:03:53.143095   51272 retry.go:31] will retry after 263.706143ms: waiting for machine to come up
	I1128 01:03:53.408770   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:03:53.409379   51138 main.go:141] libmachine: (auto-167798) DBG | unable to find current IP address of domain auto-167798 in network mk-auto-167798
	I1128 01:03:53.409402   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:03:53.409302   51272 retry.go:31] will retry after 474.069719ms: waiting for machine to come up
	I1128 01:03:53.884637   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:03:53.885199   51138 main.go:141] libmachine: (auto-167798) DBG | unable to find current IP address of domain auto-167798 in network mk-auto-167798
	I1128 01:03:53.885252   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:03:53.885161   51272 retry.go:31] will retry after 475.556856ms: waiting for machine to come up
	I1128 01:03:54.362363   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:03:54.362803   51138 main.go:141] libmachine: (auto-167798) DBG | unable to find current IP address of domain auto-167798 in network mk-auto-167798
	I1128 01:03:54.362829   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:03:54.362764   51272 retry.go:31] will retry after 550.95018ms: waiting for machine to come up
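	The retry.go lines above poll the libvirt DHCP leases with a growing backoff until the new domain reports an IP address. A generic Go sketch of that poll-with-backoff pattern (the lookup function and timings are placeholders):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a stand-in for querying libvirt DHCP leases; it fails until
    // some external condition is met.
    func lookupIP() (string, error) {
        if rand.Intn(4) != 0 { // simulate the machine not being up yet
            return "", errors.New("unable to find current IP address")
        }
        return "192.168.61.10", nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        backoff := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            ip, err := lookupIP()
            if err == nil {
                fmt.Println("machine is up at", ip)
                return
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
            time.Sleep(backoff)
            backoff = time.Duration(float64(backoff) * 1.5) // grow the wait, like the log above
        }
        fmt.Println("timed out waiting for machine to come up")
    }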
	I1128 01:03:52.369316   50808 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1128 01:03:52.369400   50808 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 01:03:52.406510   50808 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.0". assuming images are not preloaded.
	I1128 01:03:52.406570   50808 ssh_runner.go:195] Run: which lz4
	I1128 01:03:52.410839   50808 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1128 01:03:52.415176   50808 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 01:03:52.415210   50808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401669218 bytes)
	I1128 01:03:54.274163   50808 crio.go:444] Took 1.863372 seconds to copy over tarball
	I1128 01:03:54.274232   50808 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
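	Since no preloaded images were found, the ~400 MB preload tarball is copied into the guest and unpacked with `tar -I lz4 -C /var -xf /preloaded.tar.lz4`. A hypothetical Go wrapper around that exact invocation, timing it the way the log does:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        // Same invocation as in the log; lz4 and the tarball must already be on the host.
        cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("took %s to extract the tarball\n", time.Since(start))
    }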
	I1128 01:03:54.915693   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:03:54.916245   51138 main.go:141] libmachine: (auto-167798) DBG | unable to find current IP address of domain auto-167798 in network mk-auto-167798
	I1128 01:03:54.916272   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:03:54.916176   51272 retry.go:31] will retry after 916.964258ms: waiting for machine to come up
	I1128 01:03:55.835164   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:03:55.835717   51138 main.go:141] libmachine: (auto-167798) DBG | unable to find current IP address of domain auto-167798 in network mk-auto-167798
	I1128 01:03:55.835744   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:03:55.835680   51272 retry.go:31] will retry after 883.390745ms: waiting for machine to come up
	I1128 01:03:56.720796   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:03:56.721315   51138 main.go:141] libmachine: (auto-167798) DBG | unable to find current IP address of domain auto-167798 in network mk-auto-167798
	I1128 01:03:56.721345   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:03:56.721258   51272 retry.go:31] will retry after 1.045987041s: waiting for machine to come up
	I1128 01:03:57.769060   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:03:57.769605   51138 main.go:141] libmachine: (auto-167798) DBG | unable to find current IP address of domain auto-167798 in network mk-auto-167798
	I1128 01:03:57.769630   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:03:57.769566   51272 retry.go:31] will retry after 1.126067288s: waiting for machine to come up
	I1128 01:03:58.897048   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:03:58.897582   51138 main.go:141] libmachine: (auto-167798) DBG | unable to find current IP address of domain auto-167798 in network mk-auto-167798
	I1128 01:03:58.897614   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:03:58.897537   51272 retry.go:31] will retry after 2.262244349s: waiting for machine to come up
	I1128 01:03:57.239904   50808 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.965647059s)
	I1128 01:03:57.239933   50808 crio.go:451] Took 2.965744 seconds to extract the tarball
	I1128 01:03:57.239944   50808 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 01:03:57.280949   50808 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 01:03:57.357674   50808 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 01:03:57.357699   50808 cache_images.go:84] Images are preloaded, skipping loading
	I1128 01:03:57.357835   50808 ssh_runner.go:195] Run: crio config
	I1128 01:03:57.413844   50808 cni.go:84] Creating CNI manager for ""
	I1128 01:03:57.413870   50808 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 01:03:57.413892   50808 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1128 01:03:57.413919   50808 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.231 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-517109 NodeName:newest-cni-517109 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.231"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.39.231 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 01:03:57.414092   50808 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.231
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-517109"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.231
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.231"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 01:03:57.414184   50808 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-517109 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.0 ClusterName:newest-cni-517109 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 01:03:57.414264   50808 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.0
	I1128 01:03:57.424807   50808 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 01:03:57.424886   50808 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 01:03:57.434690   50808 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I1128 01:03:57.451544   50808 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1128 01:03:57.468400   50808 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1128 01:03:57.485183   50808 ssh_runner.go:195] Run: grep 192.168.39.231	control-plane.minikube.internal$ /etc/hosts
	I1128 01:03:57.489245   50808 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.231	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 01:03:57.500894   50808 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109 for IP: 192.168.39.231
	I1128 01:03:57.500927   50808 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:03:57.501122   50808 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 01:03:57.501181   50808 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 01:03:57.501404   50808 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/client.key
	I1128 01:03:57.501455   50808 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/apiserver.key.cabadef2
	I1128 01:03:57.501472   50808 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/apiserver.crt.cabadef2 with IP's: [192.168.39.231 10.96.0.1 127.0.0.1 10.0.0.1]
	I1128 01:03:57.643939   50808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/apiserver.crt.cabadef2 ...
	I1128 01:03:57.643979   50808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/apiserver.crt.cabadef2: {Name:mk9aa9201348c8cf760c0d7eff4fe9a492921c57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:03:57.644168   50808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/apiserver.key.cabadef2 ...
	I1128 01:03:57.644185   50808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/apiserver.key.cabadef2: {Name:mkb6ad3be7847d2c01f6330bcb76b4b413ae5b62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:03:57.644267   50808 certs.go:337] copying /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/apiserver.crt.cabadef2 -> /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/apiserver.crt
	I1128 01:03:57.644323   50808 certs.go:341] copying /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/apiserver.key.cabadef2 -> /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/apiserver.key
	I1128 01:03:57.644376   50808 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/proxy-client.key
	I1128 01:03:57.644389   50808 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/proxy-client.crt with IP's: []
	I1128 01:03:57.857580   50808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/proxy-client.crt ...
	I1128 01:03:57.857616   50808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/proxy-client.crt: {Name:mk0a1f6cdf0a861a8482cca4957e922b39c814d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:03:57.857767   50808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/proxy-client.key ...
	I1128 01:03:57.857781   50808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/proxy-client.key: {Name:mk38162433a37e7a57e7e9e5633cd05a95563167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
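	The crypto.go lines above issue the apiserver and proxy-client certificates, with the node IP, service IP, and loopback addresses listed as SANs on the apiserver cert. A self-contained sketch of producing a SAN-bearing certificate with crypto/x509 (self-signed here for brevity, whereas the certs in the log are signed by the shared minikube CA):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the IPs listed in the log above.
            IPAddresses: []net.IP{
                net.ParseIP("192.168.39.231"),
                net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }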
	I1128 01:03:57.857961   50808 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 01:03:57.858012   50808 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 01:03:57.858029   50808 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 01:03:57.858069   50808 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 01:03:57.858107   50808 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 01:03:57.858140   50808 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 01:03:57.858197   50808 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 01:03:57.858785   50808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 01:03:57.886745   50808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1128 01:03:57.911880   50808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 01:03:57.937552   50808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 01:03:57.963950   50808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 01:03:57.989606   50808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 01:03:58.013875   50808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 01:03:58.038325   50808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 01:03:58.063060   50808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 01:03:58.086402   50808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 01:03:58.112795   50808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 01:03:58.135781   50808 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 01:03:58.155776   50808 ssh_runner.go:195] Run: openssl version
	I1128 01:03:58.163525   50808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 01:03:58.175208   50808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 01:03:58.180107   50808 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 01:03:58.180169   50808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 01:03:58.186422   50808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 01:03:58.199006   50808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 01:03:58.210863   50808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 01:03:58.216806   50808 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 01:03:58.216868   50808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 01:03:58.224548   50808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 01:03:58.237790   50808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 01:03:58.251890   50808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 01:03:58.257773   50808 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 01:03:58.257830   50808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 01:03:58.264309   50808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
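	The openssl/ln pairs above install each CA certificate under /usr/share/ca-certificates and then create the subject-hash symlink (for example 3ec20f2e.0) that OpenSSL expects in /etc/ssl/certs. A sketch that shells out to the same two commands seen in the log; paths are the ones from the log, and this should only be run on a disposable host:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCert(pemPath string) error {
        // `openssl x509 -hash -noout -in <cert>` prints the subject hash used for
        // the /etc/ssl/certs/<hash>.0 symlink name.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if _, err := os.Lstat(link); err == nil {
            return nil // symlink already present
        }
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/119302.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }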
	I1128 01:03:58.274640   50808 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 01:03:58.279149   50808 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 01:03:58.279205   50808 kubeadm.go:404] StartCluster: {Name:newest-cni-517109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.0 ClusterName:newest-cni-517109 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 01:03:58.279277   50808 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 01:03:58.279344   50808 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 01:03:58.318675   50808 cri.go:89] found id: ""
	I1128 01:03:58.318745   50808 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 01:03:58.329213   50808 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 01:03:58.338330   50808 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 01:03:58.347583   50808 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 01:03:58.347641   50808 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 01:03:58.735664   50808 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 01:04:01.161056   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:01.161472   51138 main.go:141] libmachine: (auto-167798) DBG | unable to find current IP address of domain auto-167798 in network mk-auto-167798
	I1128 01:04:01.161501   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:04:01.161417   51272 retry.go:31] will retry after 2.01957098s: waiting for machine to come up
	I1128 01:04:03.182520   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:03.183084   51138 main.go:141] libmachine: (auto-167798) DBG | unable to find current IP address of domain auto-167798 in network mk-auto-167798
	I1128 01:04:03.183108   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:04:03.183037   51272 retry.go:31] will retry after 3.589463996s: waiting for machine to come up
	I1128 01:04:06.773974   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:06.774389   51138 main.go:141] libmachine: (auto-167798) DBG | unable to find current IP address of domain auto-167798 in network mk-auto-167798
	I1128 01:04:06.774418   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:04:06.774344   51272 retry.go:31] will retry after 3.171653547s: waiting for machine to come up
	I1128 01:04:12.228645   50808 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.0
	I1128 01:04:12.228737   50808 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 01:04:12.228852   50808 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 01:04:12.228993   50808 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 01:04:12.229120   50808 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 01:04:12.229208   50808 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 01:04:12.231003   50808 out.go:204]   - Generating certificates and keys ...
	I1128 01:04:12.231077   50808 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 01:04:12.231142   50808 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 01:04:12.231223   50808 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1128 01:04:12.231279   50808 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1128 01:04:12.231347   50808 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1128 01:04:12.231417   50808 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1128 01:04:12.231494   50808 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1128 01:04:12.231663   50808 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-517109] and IPs [192.168.39.231 127.0.0.1 ::1]
	I1128 01:04:12.231743   50808 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1128 01:04:12.231883   50808 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-517109] and IPs [192.168.39.231 127.0.0.1 ::1]
	I1128 01:04:12.231950   50808 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1128 01:04:12.232006   50808 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1128 01:04:12.232043   50808 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1128 01:04:12.232102   50808 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 01:04:12.232169   50808 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 01:04:12.232246   50808 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1128 01:04:12.232291   50808 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 01:04:12.232343   50808 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 01:04:12.232388   50808 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 01:04:12.232454   50808 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 01:04:12.232534   50808 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 01:04:12.234440   50808 out.go:204]   - Booting up control plane ...
	I1128 01:04:12.234540   50808 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 01:04:12.234634   50808 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 01:04:12.234723   50808 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 01:04:12.234856   50808 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 01:04:12.234954   50808 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 01:04:12.235014   50808 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 01:04:12.235155   50808 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 01:04:12.235260   50808 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004823 seconds
	I1128 01:04:12.235380   50808 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 01:04:12.235547   50808 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 01:04:12.235646   50808 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 01:04:12.235887   50808 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-517109 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 01:04:12.235987   50808 kubeadm.go:322] [bootstrap-token] Using token: xtvjgo.2u3cowl47pm643qq
	I1128 01:04:12.237548   50808 out.go:204]   - Configuring RBAC rules ...
	I1128 01:04:12.237677   50808 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 01:04:12.237794   50808 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 01:04:12.237956   50808 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 01:04:12.238098   50808 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 01:04:12.238239   50808 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 01:04:12.238368   50808 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 01:04:12.238539   50808 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 01:04:12.238586   50808 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 01:04:12.238625   50808 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 01:04:12.238634   50808 kubeadm.go:322] 
	I1128 01:04:12.238686   50808 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 01:04:12.238692   50808 kubeadm.go:322] 
	I1128 01:04:12.238753   50808 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 01:04:12.238758   50808 kubeadm.go:322] 
	I1128 01:04:12.238793   50808 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 01:04:12.238884   50808 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 01:04:12.238952   50808 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 01:04:12.238963   50808 kubeadm.go:322] 
	I1128 01:04:12.239055   50808 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 01:04:12.239066   50808 kubeadm.go:322] 
	I1128 01:04:12.239133   50808 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 01:04:12.239144   50808 kubeadm.go:322] 
	I1128 01:04:12.239208   50808 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 01:04:12.239295   50808 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 01:04:12.239390   50808 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 01:04:12.239403   50808 kubeadm.go:322] 
	I1128 01:04:12.239507   50808 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 01:04:12.239612   50808 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 01:04:12.239624   50808 kubeadm.go:322] 
	I1128 01:04:12.239719   50808 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xtvjgo.2u3cowl47pm643qq \
	I1128 01:04:12.239836   50808 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1128 01:04:12.239881   50808 kubeadm.go:322] 	--control-plane 
	I1128 01:04:12.239895   50808 kubeadm.go:322] 
	I1128 01:04:12.239979   50808 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 01:04:12.239985   50808 kubeadm.go:322] 
	I1128 01:04:12.240057   50808 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xtvjgo.2u3cowl47pm643qq \
	I1128 01:04:12.240203   50808 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1128 01:04:12.240215   50808 cni.go:84] Creating CNI manager for ""
	I1128 01:04:12.240221   50808 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 01:04:12.241932   50808 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 01:04:09.947352   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:09.947886   51138 main.go:141] libmachine: (auto-167798) DBG | unable to find current IP address of domain auto-167798 in network mk-auto-167798
	I1128 01:04:09.947913   51138 main.go:141] libmachine: (auto-167798) DBG | I1128 01:04:09.947833   51272 retry.go:31] will retry after 5.57835767s: waiting for machine to come up
	I1128 01:04:12.243254   50808 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 01:04:12.270096   50808 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
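The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is a bridge CNI config. The snippet below writes a representative bridge conflist; the exact subnet and plugin flags are assumptions, not a copy of minikube's template.

    package main

    import (
    	"fmt"
    	"os"
    )

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
    	// The real target is /etc/cni/net.d/1-k8s.conflist; a local path is used here.
    	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }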
	I1128 01:04:12.328300   50808 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 01:04:12.328395   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:12.328395   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=newest-cni-517109 minikube.k8s.io/updated_at=2023_11_28T01_04_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:12.413061   50808 ops.go:34] apiserver oom_adj: -16
	I1128 01:04:12.684363   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:12.823076   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:13.420942   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:13.921163   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:14.420843   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:15.527727   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:15.528202   51138 main.go:141] libmachine: (auto-167798) Found IP for machine: 192.168.61.116
	I1128 01:04:15.528233   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has current primary IP address 192.168.61.116 and MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:15.528247   51138 main.go:141] libmachine: (auto-167798) Reserving static IP address...
	I1128 01:04:15.528582   51138 main.go:141] libmachine: (auto-167798) DBG | unable to find host DHCP lease matching {name: "auto-167798", mac: "52:54:00:58:0f:21", ip: "192.168.61.116"} in network mk-auto-167798
	I1128 01:04:15.603862   51138 main.go:141] libmachine: (auto-167798) DBG | Getting to WaitForSSH function...
	I1128 01:04:15.603920   51138 main.go:141] libmachine: (auto-167798) Reserved static IP address: 192.168.61.116
	I1128 01:04:15.603949   51138 main.go:141] libmachine: (auto-167798) Waiting for SSH to be available...
	I1128 01:04:15.606611   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:15.606926   51138 main.go:141] libmachine: (auto-167798) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:58:0f:21", ip: ""} in network mk-auto-167798
	I1128 01:04:15.606954   51138 main.go:141] libmachine: (auto-167798) DBG | unable to find defined IP address of network mk-auto-167798 interface with MAC address 52:54:00:58:0f:21
	I1128 01:04:15.607097   51138 main.go:141] libmachine: (auto-167798) DBG | Using SSH client type: external
	I1128 01:04:15.607142   51138 main.go:141] libmachine: (auto-167798) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/auto-167798/id_rsa (-rw-------)
	I1128 01:04:15.607179   51138 main.go:141] libmachine: (auto-167798) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/auto-167798/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 01:04:15.607196   51138 main.go:141] libmachine: (auto-167798) DBG | About to run SSH command:
	I1128 01:04:15.607212   51138 main.go:141] libmachine: (auto-167798) DBG | exit 0
	I1128 01:04:15.610834   51138 main.go:141] libmachine: (auto-167798) DBG | SSH cmd err, output: exit status 255: 
	I1128 01:04:15.610855   51138 main.go:141] libmachine: (auto-167798) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1128 01:04:15.610867   51138 main.go:141] libmachine: (auto-167798) DBG | command : exit 0
	I1128 01:04:15.610873   51138 main.go:141] libmachine: (auto-167798) DBG | err     : exit status 255
	I1128 01:04:15.610881   51138 main.go:141] libmachine: (auto-167798) DBG | output  : 
	I1128 01:04:18.613372   51138 main.go:141] libmachine: (auto-167798) DBG | Getting to WaitForSSH function...
	I1128 01:04:18.616366   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:18.616909   51138 main.go:141] libmachine: (auto-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:0f:21", ip: ""} in network mk-auto-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:04:08 +0000 UTC Type:0 Mac:52:54:00:58:0f:21 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:auto-167798 Clientid:01:52:54:00:58:0f:21}
	I1128 01:04:18.616945   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined IP address 192.168.61.116 and MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:18.617225   51138 main.go:141] libmachine: (auto-167798) DBG | Using SSH client type: external
	I1128 01:04:18.617253   51138 main.go:141] libmachine: (auto-167798) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/auto-167798/id_rsa (-rw-------)
	I1128 01:04:18.617288   51138 main.go:141] libmachine: (auto-167798) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/auto-167798/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 01:04:18.617302   51138 main.go:141] libmachine: (auto-167798) DBG | About to run SSH command:
	I1128 01:04:18.617318   51138 main.go:141] libmachine: (auto-167798) DBG | exit 0
	I1128 01:04:18.717163   51138 main.go:141] libmachine: (auto-167798) DBG | SSH cmd err, output: <nil>: 
	I1128 01:04:18.717438   51138 main.go:141] libmachine: (auto-167798) KVM machine creation complete!
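A sketch of the WaitForSSH probe seen above: run `exit 0` over SSH until it succeeds, retrying on a non-zero status (the first attempt returned exit status 255 because sshd was not up yet). Host and key path are taken from this log; the loop itself is illustrative, not minikube's code.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func sshExitZero(host, keyPath string) error {
    	cmd := exec.Command("ssh",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-i", keyPath,
    		"docker@"+host, "exit 0")
    	return cmd.Run()
    }

    func main() {
    	host := "192.168.61.116"
    	key := "/home/jenkins/minikube-integration/17206-4749/.minikube/machines/auto-167798/id_rsa"
    	for attempt := 1; attempt <= 10; attempt++ {
    		if err := sshExitZero(host, key); err == nil {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(3 * time.Second) // the driver also waits a few seconds between probes
    	}
    	fmt.Println("gave up waiting for SSH")
    }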
	I1128 01:04:18.717820   51138 main.go:141] libmachine: (auto-167798) Calling .GetConfigRaw
	I1128 01:04:18.718396   51138 main.go:141] libmachine: (auto-167798) Calling .DriverName
	I1128 01:04:18.718632   51138 main.go:141] libmachine: (auto-167798) Calling .DriverName
	I1128 01:04:18.718843   51138 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1128 01:04:18.718860   51138 main.go:141] libmachine: (auto-167798) Calling .GetState
	I1128 01:04:18.720335   51138 main.go:141] libmachine: Detecting operating system of created instance...
	I1128 01:04:18.720355   51138 main.go:141] libmachine: Waiting for SSH to be available...
	I1128 01:04:18.720365   51138 main.go:141] libmachine: Getting to WaitForSSH function...
	I1128 01:04:18.720375   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHHostname
	I1128 01:04:18.723450   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:18.723802   51138 main.go:141] libmachine: (auto-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:0f:21", ip: ""} in network mk-auto-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:04:08 +0000 UTC Type:0 Mac:52:54:00:58:0f:21 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:auto-167798 Clientid:01:52:54:00:58:0f:21}
	I1128 01:04:18.723828   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined IP address 192.168.61.116 and MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:18.724011   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHPort
	I1128 01:04:18.724184   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHKeyPath
	I1128 01:04:18.724380   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHKeyPath
	I1128 01:04:18.724538   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHUsername
	I1128 01:04:18.724703   51138 main.go:141] libmachine: Using SSH client type: native
	I1128 01:04:18.725065   51138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I1128 01:04:18.725080   51138 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1128 01:04:18.856187   51138 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 01:04:18.856208   51138 main.go:141] libmachine: Detecting the provisioner...
	I1128 01:04:18.856216   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHHostname
	I1128 01:04:18.859508   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:18.859920   51138 main.go:141] libmachine: (auto-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:0f:21", ip: ""} in network mk-auto-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:04:08 +0000 UTC Type:0 Mac:52:54:00:58:0f:21 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:auto-167798 Clientid:01:52:54:00:58:0f:21}
	I1128 01:04:18.859975   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined IP address 192.168.61.116 and MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:18.860132   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHPort
	I1128 01:04:18.860339   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHKeyPath
	I1128 01:04:18.860524   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHKeyPath
	I1128 01:04:18.860767   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHUsername
	I1128 01:04:18.860986   51138 main.go:141] libmachine: Using SSH client type: native
	I1128 01:04:18.861302   51138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I1128 01:04:18.861314   51138 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1128 01:04:18.989690   51138 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g8be4f20-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1128 01:04:18.989803   51138 main.go:141] libmachine: found compatible host: buildroot
	I1128 01:04:18.989822   51138 main.go:141] libmachine: Provisioning with buildroot...
	I1128 01:04:18.989833   51138 main.go:141] libmachine: (auto-167798) Calling .GetMachineName
	I1128 01:04:18.990137   51138 buildroot.go:166] provisioning hostname "auto-167798"
	I1128 01:04:18.990162   51138 main.go:141] libmachine: (auto-167798) Calling .GetMachineName
	I1128 01:04:18.990368   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHHostname
	I1128 01:04:18.993377   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:18.993850   51138 main.go:141] libmachine: (auto-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:0f:21", ip: ""} in network mk-auto-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:04:08 +0000 UTC Type:0 Mac:52:54:00:58:0f:21 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:auto-167798 Clientid:01:52:54:00:58:0f:21}
	I1128 01:04:18.993880   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined IP address 192.168.61.116 and MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:18.994035   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHPort
	I1128 01:04:18.994214   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHKeyPath
	I1128 01:04:18.994414   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHKeyPath
	I1128 01:04:18.994572   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHUsername
	I1128 01:04:18.994761   51138 main.go:141] libmachine: Using SSH client type: native
	I1128 01:04:18.995215   51138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I1128 01:04:18.995237   51138 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-167798 && echo "auto-167798" | sudo tee /etc/hostname
	I1128 01:04:19.139027   51138 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-167798
	
	I1128 01:04:19.139060   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHHostname
	I1128 01:04:19.142057   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:19.142481   51138 main.go:141] libmachine: (auto-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:0f:21", ip: ""} in network mk-auto-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:04:08 +0000 UTC Type:0 Mac:52:54:00:58:0f:21 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:auto-167798 Clientid:01:52:54:00:58:0f:21}
	I1128 01:04:19.142514   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined IP address 192.168.61.116 and MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:19.142720   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHPort
	I1128 01:04:19.142902   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHKeyPath
	I1128 01:04:19.143081   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHKeyPath
	I1128 01:04:19.143245   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHUsername
	I1128 01:04:19.143443   51138 main.go:141] libmachine: Using SSH client type: native
	I1128 01:04:19.143825   51138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I1128 01:04:19.143855   51138 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-167798' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-167798/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-167798' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 01:04:19.282249   51138 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 01:04:19.282281   51138 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 01:04:19.282329   51138 buildroot.go:174] setting up certificates
	I1128 01:04:19.282343   51138 provision.go:83] configureAuth start
	I1128 01:04:19.282362   51138 main.go:141] libmachine: (auto-167798) Calling .GetMachineName
	I1128 01:04:19.282668   51138 main.go:141] libmachine: (auto-167798) Calling .GetIP
	I1128 01:04:19.285577   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:19.286006   51138 main.go:141] libmachine: (auto-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:0f:21", ip: ""} in network mk-auto-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:04:08 +0000 UTC Type:0 Mac:52:54:00:58:0f:21 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:auto-167798 Clientid:01:52:54:00:58:0f:21}
	I1128 01:04:19.286037   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined IP address 192.168.61.116 and MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:19.286228   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHHostname
	I1128 01:04:19.288877   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:19.289282   51138 main.go:141] libmachine: (auto-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:0f:21", ip: ""} in network mk-auto-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:04:08 +0000 UTC Type:0 Mac:52:54:00:58:0f:21 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:auto-167798 Clientid:01:52:54:00:58:0f:21}
	I1128 01:04:19.289318   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined IP address 192.168.61.116 and MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:19.289400   51138 provision.go:138] copyHostCerts
	I1128 01:04:19.289465   51138 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 01:04:19.289476   51138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 01:04:19.289526   51138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 01:04:19.289614   51138 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 01:04:19.289622   51138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 01:04:19.289644   51138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 01:04:19.289768   51138 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 01:04:19.289783   51138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 01:04:19.289812   51138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 01:04:19.289883   51138 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.auto-167798 san=[192.168.61.116 192.168.61.116 localhost 127.0.0.1 minikube auto-167798]
	I1128 01:04:14.920836   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:15.420876   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:15.921092   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:16.420652   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:16.921059   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:17.420892   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:17.920956   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:18.420887   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:18.920387   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:19.420745   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:19.475137   51138 provision.go:172] copyRemoteCerts
	I1128 01:04:19.475202   51138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 01:04:19.475229   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHHostname
	I1128 01:04:19.478085   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:19.478432   51138 main.go:141] libmachine: (auto-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:0f:21", ip: ""} in network mk-auto-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:04:08 +0000 UTC Type:0 Mac:52:54:00:58:0f:21 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:auto-167798 Clientid:01:52:54:00:58:0f:21}
	I1128 01:04:19.478466   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined IP address 192.168.61.116 and MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:19.478691   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHPort
	I1128 01:04:19.478898   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHKeyPath
	I1128 01:04:19.479102   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHUsername
	I1128 01:04:19.479256   51138 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/auto-167798/id_rsa Username:docker}
	I1128 01:04:19.575482   51138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 01:04:19.599373   51138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1128 01:04:19.623430   51138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 01:04:19.647976   51138 provision.go:86] duration metric: configureAuth took 365.614427ms
	I1128 01:04:19.648009   51138 buildroot.go:189] setting minikube options for container-runtime
	I1128 01:04:19.648191   51138 config.go:182] Loaded profile config "auto-167798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 01:04:19.648257   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHHostname
	I1128 01:04:19.651006   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:19.651441   51138 main.go:141] libmachine: (auto-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:0f:21", ip: ""} in network mk-auto-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:04:08 +0000 UTC Type:0 Mac:52:54:00:58:0f:21 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:auto-167798 Clientid:01:52:54:00:58:0f:21}
	I1128 01:04:19.651467   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined IP address 192.168.61.116 and MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:19.651660   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHPort
	I1128 01:04:19.651873   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHKeyPath
	I1128 01:04:19.652008   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHKeyPath
	I1128 01:04:19.652136   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHUsername
	I1128 01:04:19.652281   51138 main.go:141] libmachine: Using SSH client type: native
	I1128 01:04:19.652743   51138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I1128 01:04:19.652786   51138 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 01:04:19.989380   51138 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 01:04:19.989418   51138 main.go:141] libmachine: Checking connection to Docker...
	I1128 01:04:19.989429   51138 main.go:141] libmachine: (auto-167798) Calling .GetURL
	I1128 01:04:19.990957   51138 main.go:141] libmachine: (auto-167798) DBG | Using libvirt version 6000000
	I1128 01:04:19.993422   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:19.993797   51138 main.go:141] libmachine: (auto-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:0f:21", ip: ""} in network mk-auto-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:04:08 +0000 UTC Type:0 Mac:52:54:00:58:0f:21 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:auto-167798 Clientid:01:52:54:00:58:0f:21}
	I1128 01:04:19.993826   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined IP address 192.168.61.116 and MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:19.994047   51138 main.go:141] libmachine: Docker is up and running!
	I1128 01:04:19.994063   51138 main.go:141] libmachine: Reticulating splines...
	I1128 01:04:19.994078   51138 client.go:171] LocalClient.Create took 29.098481358s
	I1128 01:04:19.994096   51138 start.go:167] duration metric: libmachine.API.Create for "auto-167798" took 29.098546081s
	I1128 01:04:19.994105   51138 start.go:300] post-start starting for "auto-167798" (driver="kvm2")
	I1128 01:04:19.994114   51138 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 01:04:19.994128   51138 main.go:141] libmachine: (auto-167798) Calling .DriverName
	I1128 01:04:19.994474   51138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 01:04:19.994508   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHHostname
	I1128 01:04:19.997019   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:19.997320   51138 main.go:141] libmachine: (auto-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:0f:21", ip: ""} in network mk-auto-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:04:08 +0000 UTC Type:0 Mac:52:54:00:58:0f:21 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:auto-167798 Clientid:01:52:54:00:58:0f:21}
	I1128 01:04:19.997348   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined IP address 192.168.61.116 and MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:19.997562   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHPort
	I1128 01:04:19.997757   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHKeyPath
	I1128 01:04:19.997979   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHUsername
	I1128 01:04:19.998144   51138 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/auto-167798/id_rsa Username:docker}
	I1128 01:04:20.095319   51138 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 01:04:20.100438   51138 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 01:04:20.100471   51138 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 01:04:20.100535   51138 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 01:04:20.100648   51138 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 01:04:20.100748   51138 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 01:04:20.112125   51138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 01:04:20.136735   51138 start.go:303] post-start completed in 142.616288ms
	I1128 01:04:20.136804   51138 main.go:141] libmachine: (auto-167798) Calling .GetConfigRaw
	I1128 01:04:20.137414   51138 main.go:141] libmachine: (auto-167798) Calling .GetIP
	I1128 01:04:20.140199   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:20.140623   51138 main.go:141] libmachine: (auto-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:0f:21", ip: ""} in network mk-auto-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:04:08 +0000 UTC Type:0 Mac:52:54:00:58:0f:21 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:auto-167798 Clientid:01:52:54:00:58:0f:21}
	I1128 01:04:20.140652   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined IP address 192.168.61.116 and MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:20.140928   51138 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/config.json ...
	I1128 01:04:20.141117   51138 start.go:128] duration metric: createHost completed in 29.267227858s
	I1128 01:04:20.141139   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHHostname
	I1128 01:04:20.143434   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:20.143814   51138 main.go:141] libmachine: (auto-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:0f:21", ip: ""} in network mk-auto-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:04:08 +0000 UTC Type:0 Mac:52:54:00:58:0f:21 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:auto-167798 Clientid:01:52:54:00:58:0f:21}
	I1128 01:04:20.143840   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined IP address 192.168.61.116 and MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:20.144020   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHPort
	I1128 01:04:20.144226   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHKeyPath
	I1128 01:04:20.144398   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHKeyPath
	I1128 01:04:20.144598   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHUsername
	I1128 01:04:20.144840   51138 main.go:141] libmachine: Using SSH client type: native
	I1128 01:04:20.145135   51138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I1128 01:04:20.145146   51138 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 01:04:20.277896   51138 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701133460.264087892
	
	I1128 01:04:20.277918   51138 fix.go:206] guest clock: 1701133460.264087892
	I1128 01:04:20.277928   51138 fix.go:219] Guest: 2023-11-28 01:04:20.264087892 +0000 UTC Remote: 2023-11-28 01:04:20.141127937 +0000 UTC m=+50.728137686 (delta=122.959955ms)
	I1128 01:04:20.277974   51138 fix.go:190] guest clock delta is within tolerance: 122.959955ms
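The guest-clock check above compares the VM clock against the host and only acts when the delta exceeds a tolerance. The sketch below reproduces the arithmetic with the values from this log; the 2s tolerance is an assumption for illustration.

    package main

    import (
    	"fmt"
    	"time"
    )

    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) bool {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta <= tolerance
    }

    func main() {
    	guest := time.Unix(0, 1701133460264087892) // 2023-11-28 01:04:20.264087892 UTC, from the log
    	host := guest.Add(-122959955 * time.Nanosecond) // delta of 122.959955ms, from the log
    	fmt.Println("within tolerance:", clockDeltaOK(guest, host, 2*time.Second))
    }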
	I1128 01:04:20.277984   51138 start.go:83] releasing machines lock for "auto-167798", held for 29.404258463s
	I1128 01:04:20.278014   51138 main.go:141] libmachine: (auto-167798) Calling .DriverName
	I1128 01:04:20.278295   51138 main.go:141] libmachine: (auto-167798) Calling .GetIP
	I1128 01:04:20.281563   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:20.282015   51138 main.go:141] libmachine: (auto-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:0f:21", ip: ""} in network mk-auto-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:04:08 +0000 UTC Type:0 Mac:52:54:00:58:0f:21 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:auto-167798 Clientid:01:52:54:00:58:0f:21}
	I1128 01:04:20.282056   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined IP address 192.168.61.116 and MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:20.282317   51138 main.go:141] libmachine: (auto-167798) Calling .DriverName
	I1128 01:04:20.282900   51138 main.go:141] libmachine: (auto-167798) Calling .DriverName
	I1128 01:04:20.283128   51138 main.go:141] libmachine: (auto-167798) Calling .DriverName
	I1128 01:04:20.283224   51138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 01:04:20.283282   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHHostname
	I1128 01:04:20.283358   51138 ssh_runner.go:195] Run: cat /version.json
	I1128 01:04:20.283381   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHHostname
	I1128 01:04:20.286464   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:20.286492   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:20.286853   51138 main.go:141] libmachine: (auto-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:0f:21", ip: ""} in network mk-auto-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:04:08 +0000 UTC Type:0 Mac:52:54:00:58:0f:21 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:auto-167798 Clientid:01:52:54:00:58:0f:21}
	I1128 01:04:20.286891   51138 main.go:141] libmachine: (auto-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:0f:21", ip: ""} in network mk-auto-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:04:08 +0000 UTC Type:0 Mac:52:54:00:58:0f:21 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:auto-167798 Clientid:01:52:54:00:58:0f:21}
	I1128 01:04:20.286912   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined IP address 192.168.61.116 and MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:20.286931   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined IP address 192.168.61.116 and MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:20.287105   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHPort
	I1128 01:04:20.287206   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHPort
	I1128 01:04:20.287308   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHKeyPath
	I1128 01:04:20.287363   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHKeyPath
	I1128 01:04:20.287473   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHUsername
	I1128 01:04:20.287563   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHUsername
	I1128 01:04:20.287632   51138 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/auto-167798/id_rsa Username:docker}
	I1128 01:04:20.287743   51138 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/auto-167798/id_rsa Username:docker}
	I1128 01:04:20.400555   51138 ssh_runner.go:195] Run: systemctl --version
	I1128 01:04:20.406422   51138 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 01:04:20.569863   51138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 01:04:20.576803   51138 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 01:04:20.576899   51138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 01:04:20.593331   51138 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 01:04:20.593355   51138 start.go:472] detecting cgroup driver to use...
	I1128 01:04:20.593425   51138 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 01:04:20.608307   51138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 01:04:20.620744   51138 docker.go:203] disabling cri-docker service (if available) ...
	I1128 01:04:20.620821   51138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 01:04:20.633260   51138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 01:04:20.646610   51138 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 01:04:20.760086   51138 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 01:04:20.890122   51138 docker.go:219] disabling docker service ...
	I1128 01:04:20.890199   51138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 01:04:20.905924   51138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 01:04:20.920440   51138 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 01:04:21.032883   51138 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 01:04:21.168740   51138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 01:04:21.184919   51138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 01:04:21.204499   51138 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 01:04:21.204597   51138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 01:04:21.216242   51138 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 01:04:21.216316   51138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 01:04:21.227516   51138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 01:04:21.239579   51138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
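The three sed commands above pin the pause image, switch the cgroup manager to cgroupfs, and set conmon_cgroup to "pod" in the CRI-O drop-in. The sketch below applies the same line edits in memory so the resulting 02-crio.conf fragment is visible; the starting contents are hypothetical.

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    func main() {
    	conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.2"
    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
    	// pause_image -> registry.k8s.io/pause:3.9
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	// cgroup_manager -> cgroupfs
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	// drop any existing conmon_cgroup line, then pin it to "pod" after cgroup_manager
    	conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	conf = strings.Replace(conf,
    		`cgroup_manager = "cgroupfs"`,
    		"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"", 1)
    	fmt.Print(conf)
    }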
	I1128 01:04:21.252314   51138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 01:04:21.264576   51138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 01:04:21.275937   51138 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 01:04:21.275994   51138 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 01:04:21.291704   51138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
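A sketch of the fallback above: reading the bridge-nf-call-iptables sysctl fails while br_netfilter is not loaded, so the module is loaded and IPv4 forwarding is enabled. Commands mirror the log; error handling is trimmed and the flow is illustrative only.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
    		// Module not loaded yet: "cannot stat ... No such file or directory"
    		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			fmt.Fprintf(os.Stderr, "modprobe failed: %v: %s\n", err, out)
    		}
    	}
    	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }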
	I1128 01:04:21.302160   51138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 01:04:21.407004   51138 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 01:04:21.603202   51138 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 01:04:21.603304   51138 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 01:04:21.608882   51138 start.go:540] Will wait 60s for crictl version
	I1128 01:04:21.608954   51138 ssh_runner.go:195] Run: which crictl
	I1128 01:04:21.613721   51138 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 01:04:21.657333   51138 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 01:04:21.657431   51138 ssh_runner.go:195] Run: crio --version
	I1128 01:04:21.712941   51138 ssh_runner.go:195] Run: crio --version
	I1128 01:04:21.769338   51138 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 01:04:21.771039   51138 main.go:141] libmachine: (auto-167798) Calling .GetIP
	I1128 01:04:21.773902   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:21.774343   51138 main.go:141] libmachine: (auto-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:0f:21", ip: ""} in network mk-auto-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:04:08 +0000 UTC Type:0 Mac:52:54:00:58:0f:21 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:auto-167798 Clientid:01:52:54:00:58:0f:21}
	I1128 01:04:21.774373   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined IP address 192.168.61.116 and MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:21.774653   51138 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1128 01:04:21.779400   51138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 01:04:21.792101   51138 localpath.go:92] copying /home/jenkins/minikube-integration/17206-4749/.minikube/client.crt -> /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/client.crt
	I1128 01:04:21.792267   51138 localpath.go:117] copying /home/jenkins/minikube-integration/17206-4749/.minikube/client.key -> /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/client.key
	I1128 01:04:21.792406   51138 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 01:04:21.792474   51138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 01:04:21.828301   51138 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 01:04:21.828367   51138 ssh_runner.go:195] Run: which lz4
	I1128 01:04:21.832525   51138 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 01:04:21.836815   51138 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 01:04:21.836850   51138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 01:04:23.657612   51138 crio.go:444] Took 1.825131 seconds to copy over tarball
	I1128 01:04:23.657698   51138 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
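The preload flow above checks for /preloaded.tar.lz4 on the guest, copies it over when missing, and unpacks it into /var with lz4. A sketch of the check-then-extract step, with the scp elided; paths match the log and the rest is illustrative.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const tarball = "/preloaded.tar.lz4"
    	if _, err := os.Stat(tarball); err != nil {
    		fmt.Println("preload tarball missing; it would be scp'd from the host cache first")
    		return
    	}
    	// Equivalent of: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    	if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput(); err != nil {
    		fmt.Fprintf(os.Stderr, "extract failed: %v: %s\n", err, out)
    	}
    }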
	I1128 01:04:19.920402   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:20.421425   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:20.920512   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:21.420872   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:21.920542   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:22.421291   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:22.921020   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:23.420906   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:23.921399   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:24.420914   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:24.920610   50808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:25.087594   50808 kubeadm.go:1081] duration metric: took 12.759283494s to wait for elevateKubeSystemPrivileges.
	I1128 01:04:25.087632   50808 kubeadm.go:406] StartCluster complete in 26.808432253s
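The long run of `kubectl get sa default` calls above is the elevateKubeSystemPrivileges wait: poll roughly every 500ms until the default service account exists. A minimal sketch of that loop, assuming the binary path and kubeconfig from this log; not minikube's implementation.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.29.0-rc.0/kubectl"
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		if cmd.Run() == nil {
    			fmt.Println("default service account exists")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the default service account")
    }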
	I1128 01:04:25.087653   50808 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:04:25.087737   50808 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 01:04:25.089369   50808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:04:25.089635   50808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 01:04:25.089763   50808 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 01:04:25.089841   50808 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-517109"
	I1128 01:04:25.089864   50808 config.go:182] Loaded profile config "newest-cni-517109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 01:04:25.089876   50808 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-517109"
	I1128 01:04:25.089915   50808 addons.go:69] Setting default-storageclass=true in profile "newest-cni-517109"
	I1128 01:04:25.089934   50808 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-517109"
	I1128 01:04:25.089937   50808 host.go:66] Checking if "newest-cni-517109" exists ...
	I1128 01:04:25.090238   50808 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 01:04:25.090250   50808 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 01:04:25.090269   50808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 01:04:25.090278   50808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 01:04:25.106943   50808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35955
	I1128 01:04:25.107429   50808 main.go:141] libmachine: () Calling .GetVersion
	I1128 01:04:25.107963   50808 main.go:141] libmachine: Using API Version  1
	I1128 01:04:25.107986   50808 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 01:04:25.108342   50808 main.go:141] libmachine: () Calling .GetMachineName
	I1128 01:04:25.108548   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetState
	I1128 01:04:25.109047   50808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45029
	I1128 01:04:25.109634   50808 main.go:141] libmachine: () Calling .GetVersion
	I1128 01:04:25.110179   50808 main.go:141] libmachine: Using API Version  1
	I1128 01:04:25.110202   50808 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 01:04:25.110575   50808 main.go:141] libmachine: () Calling .GetMachineName
	I1128 01:04:25.111162   50808 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 01:04:25.111188   50808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 01:04:25.112158   50808 addons.go:231] Setting addon default-storageclass=true in "newest-cni-517109"
	I1128 01:04:25.112199   50808 host.go:66] Checking if "newest-cni-517109" exists ...
	I1128 01:04:25.112559   50808 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 01:04:25.112606   50808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 01:04:25.124553   50808 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-517109" context rescaled to 1 replicas
	I1128 01:04:25.124662   50808 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.231 Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 01:04:25.126488   50808 out.go:177] * Verifying Kubernetes components...
	I1128 01:04:25.128132   50808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 01:04:25.130255   50808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38529
	I1128 01:04:25.130502   50808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40033
	I1128 01:04:25.130685   50808 main.go:141] libmachine: () Calling .GetVersion
	I1128 01:04:25.131184   50808 main.go:141] libmachine: () Calling .GetVersion
	I1128 01:04:25.131203   50808 main.go:141] libmachine: Using API Version  1
	I1128 01:04:25.131221   50808 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 01:04:25.131573   50808 main.go:141] libmachine: () Calling .GetMachineName
	I1128 01:04:25.131893   50808 main.go:141] libmachine: Using API Version  1
	I1128 01:04:25.131913   50808 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 01:04:25.132207   50808 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 01:04:25.132222   50808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 01:04:25.132676   50808 main.go:141] libmachine: () Calling .GetMachineName
	I1128 01:04:25.132906   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetState
	I1128 01:04:25.134749   50808 main.go:141] libmachine: (newest-cni-517109) Calling .DriverName
	I1128 01:04:25.136344   50808 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 01:04:27.029063   51138 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.371336483s)
	I1128 01:04:27.029109   51138 crio.go:451] Took 3.371466 seconds to extract the tarball
	I1128 01:04:27.029118   51138 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 01:04:27.080422   51138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 01:04:27.166120   51138 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 01:04:27.166145   51138 cache_images.go:84] Images are preloaded, skipping loading
	I1128 01:04:27.166209   51138 ssh_runner.go:195] Run: crio config
	I1128 01:04:27.228914   51138 cni.go:84] Creating CNI manager for ""
	I1128 01:04:27.228942   51138 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 01:04:27.228965   51138 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 01:04:27.228989   51138 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-167798 NodeName:auto-167798 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 01:04:27.229153   51138 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-167798"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 01:04:27.229284   51138 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=auto-167798 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:auto-167798 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 01:04:27.229352   51138 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 01:04:27.239313   51138 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 01:04:27.239374   51138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 01:04:27.248624   51138 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1128 01:04:27.265672   51138 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 01:04:27.281412   51138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I1128 01:04:27.299019   51138 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I1128 01:04:27.303296   51138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 01:04:27.315333   51138 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798 for IP: 192.168.61.116
	I1128 01:04:27.315366   51138 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:04:27.315506   51138 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 01:04:27.315559   51138 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 01:04:27.315676   51138 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/client.key
	I1128 01:04:27.315701   51138 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/apiserver.key.6062d48b
	I1128 01:04:27.315711   51138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/apiserver.crt.6062d48b with IP's: [192.168.61.116 10.96.0.1 127.0.0.1 10.0.0.1]
	I1128 01:04:27.372381   51138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/apiserver.crt.6062d48b ...
	I1128 01:04:27.372413   51138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/apiserver.crt.6062d48b: {Name:mked8646780ccdbb4e402c17ac5965b80bd7e15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:04:27.372574   51138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/apiserver.key.6062d48b ...
	I1128 01:04:27.372589   51138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/apiserver.key.6062d48b: {Name:mka01a924cab498d23b2b0a1dec5728a29c39845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:04:27.372657   51138 certs.go:337] copying /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/apiserver.crt.6062d48b -> /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/apiserver.crt
	I1128 01:04:27.372714   51138 certs.go:341] copying /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/apiserver.key.6062d48b -> /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/apiserver.key
	I1128 01:04:27.372813   51138 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/proxy-client.key
	I1128 01:04:27.372837   51138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/proxy-client.crt with IP's: []
	I1128 01:04:27.670267   51138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/proxy-client.crt ...
	I1128 01:04:27.670294   51138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/proxy-client.crt: {Name:mk6e6b4d5babe693147da68fb76b51b73439b5c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:04:27.670445   51138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/proxy-client.key ...
	I1128 01:04:27.670455   51138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/proxy-client.key: {Name:mk886f0e333e54ae7a6b7079d1fb3efbea30647e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:04:27.670619   51138 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 01:04:27.670653   51138 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 01:04:27.670660   51138 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 01:04:27.670680   51138 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 01:04:27.670705   51138 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 01:04:27.670732   51138 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 01:04:27.670777   51138 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 01:04:27.671341   51138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 01:04:27.696942   51138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1128 01:04:27.766910   51138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 01:04:27.815771   51138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 01:04:27.840868   51138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 01:04:27.863699   51138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 01:04:27.886158   51138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 01:04:27.908481   51138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 01:04:27.932026   51138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 01:04:27.958693   51138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 01:04:27.983036   51138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 01:04:28.007797   51138 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 01:04:28.026234   51138 ssh_runner.go:195] Run: openssl version
	I1128 01:04:28.032586   51138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 01:04:28.043681   51138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 01:04:28.048841   51138 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 01:04:28.048903   51138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 01:04:28.054805   51138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 01:04:28.066221   51138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 01:04:28.076959   51138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 01:04:28.082234   51138 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 01:04:28.082300   51138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 01:04:28.088209   51138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 01:04:28.098913   51138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 01:04:28.110213   51138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 01:04:28.116102   51138 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 01:04:28.116179   51138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 01:04:28.122377   51138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 01:04:28.136189   51138 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 01:04:28.141461   51138 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 01:04:28.141527   51138 kubeadm.go:404] StartCluster: {Name:auto-167798 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-167798 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 01:04:28.141604   51138 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 01:04:28.141659   51138 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 01:04:28.196550   51138 cri.go:89] found id: ""
	I1128 01:04:28.196629   51138 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 01:04:28.207886   51138 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 01:04:28.219026   51138 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 01:04:28.229131   51138 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 01:04:28.229184   51138 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 01:04:28.284258   51138 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1128 01:04:28.284443   51138 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 01:04:28.443200   51138 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 01:04:28.443362   51138 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 01:04:28.443502   51138 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 01:04:28.714690   51138 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 01:04:25.137990   50808 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 01:04:25.138007   50808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 01:04:25.138025   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHHostname
	I1128 01:04:25.141233   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:04:25.141731   50808 main.go:141] libmachine: (newest-cni-517109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:eb", ip: ""} in network mk-newest-cni-517109: {Iface:virbr1 ExpiryTime:2023-11-28 02:03:41 +0000 UTC Type:0 Mac:52:54:00:af:04:eb Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:newest-cni-517109 Clientid:01:52:54:00:af:04:eb}
	I1128 01:04:25.141754   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined IP address 192.168.39.231 and MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:04:25.141790   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHPort
	I1128 01:04:25.141949   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHKeyPath
	I1128 01:04:25.142100   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHUsername
	I1128 01:04:25.142223   50808 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/newest-cni-517109/id_rsa Username:docker}
	I1128 01:04:25.155977   50808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I1128 01:04:25.156574   50808 main.go:141] libmachine: () Calling .GetVersion
	I1128 01:04:25.157137   50808 main.go:141] libmachine: Using API Version  1
	I1128 01:04:25.157165   50808 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 01:04:25.157523   50808 main.go:141] libmachine: () Calling .GetMachineName
	I1128 01:04:25.157695   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetState
	I1128 01:04:25.159464   50808 main.go:141] libmachine: (newest-cni-517109) Calling .DriverName
	I1128 01:04:25.159719   50808 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 01:04:25.159733   50808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 01:04:25.159746   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHHostname
	I1128 01:04:25.162688   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:04:25.163056   50808 main.go:141] libmachine: (newest-cni-517109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:eb", ip: ""} in network mk-newest-cni-517109: {Iface:virbr1 ExpiryTime:2023-11-28 02:03:41 +0000 UTC Type:0 Mac:52:54:00:af:04:eb Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:newest-cni-517109 Clientid:01:52:54:00:af:04:eb}
	I1128 01:04:25.163082   50808 main.go:141] libmachine: (newest-cni-517109) DBG | domain newest-cni-517109 has defined IP address 192.168.39.231 and MAC address 52:54:00:af:04:eb in network mk-newest-cni-517109
	I1128 01:04:25.163307   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHPort
	I1128 01:04:25.163458   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHKeyPath
	I1128 01:04:25.163563   50808 main.go:141] libmachine: (newest-cni-517109) Calling .GetSSHUsername
	I1128 01:04:25.163671   50808 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/newest-cni-517109/id_rsa Username:docker}
	I1128 01:04:25.294652   50808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 01:04:25.295688   50808 api_server.go:52] waiting for apiserver process to appear ...
	I1128 01:04:25.295747   50808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 01:04:25.321789   50808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 01:04:25.366840   50808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 01:04:26.206227   50808 api_server.go:72] duration metric: took 1.081518162s to wait for apiserver process to appear ...
	I1128 01:04:26.206253   50808 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1128 01:04:26.206262   50808 api_server.go:88] waiting for apiserver healthz status ...
	I1128 01:04:26.206302   50808 api_server.go:253] Checking apiserver healthz at https://192.168.39.231:8443/healthz ...
	I1128 01:04:26.527240   50808 api_server.go:279] https://192.168.39.231:8443/healthz returned 200:
	ok
	I1128 01:04:26.531749   50808 api_server.go:141] control plane version: v1.29.0-rc.0
	I1128 01:04:26.531778   50808 api_server.go:131] duration metric: took 325.491623ms to wait for apiserver health ...
	I1128 01:04:26.531789   50808 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 01:04:27.074410   50808 system_pods.go:59] 7 kube-system pods found
	I1128 01:04:27.074450   50808 system_pods.go:61] "coredns-76f75df574-brfk2" [03860695-4a49-401b-b584-79dadc32869a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 01:04:27.074461   50808 system_pods.go:61] "coredns-76f75df574-mbc8m" [9196fe20-e621-4bb9-9284-0eac1729a129] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 01:04:27.074470   50808 system_pods.go:61] "etcd-newest-cni-517109" [01827a9b-8650-45ba-b945-2919c43cc1f2] Running
	I1128 01:04:27.074480   50808 system_pods.go:61] "kube-apiserver-newest-cni-517109" [4f9a527f-4930-4481-af46-9c6e08ea4825] Running
	I1128 01:04:27.074487   50808 system_pods.go:61] "kube-controller-manager-newest-cni-517109" [1448b06b-c77a-4b42-a91f-14b11619c8b8] Running
	I1128 01:04:27.074496   50808 system_pods.go:61] "kube-proxy-9q8q5" [e6bb713c-2615-4c31-b693-b243f9e63547] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 01:04:27.074502   50808 system_pods.go:61] "kube-scheduler-newest-cni-517109" [57a48e78-a14e-40de-9b97-9cb0d41f327f] Running
	I1128 01:04:27.074512   50808 system_pods.go:74] duration metric: took 542.716652ms to wait for pod list to return data ...
	I1128 01:04:27.074521   50808 default_sa.go:34] waiting for default service account to be created ...
	I1128 01:04:28.280720   50808 default_sa.go:45] found service account: "default"
	I1128 01:04:28.280767   50808 default_sa.go:55] duration metric: took 1.206219674s for default service account to be created ...
	I1128 01:04:28.280783   50808 kubeadm.go:581] duration metric: took 3.156080367s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1128 01:04:28.280798   50808 node_conditions.go:102] verifying NodePressure condition ...
	I1128 01:04:28.290987   50808 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 01:04:28.291042   50808 node_conditions.go:123] node cpu capacity is 2
	I1128 01:04:28.291059   50808 node_conditions.go:105] duration metric: took 10.255402ms to run NodePressure ...
	I1128 01:04:28.291073   50808 start.go:228] waiting for startup goroutines ...
	I1128 01:04:29.062807   50808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.740968973s)
	I1128 01:04:29.062836   50808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.695961494s)
	I1128 01:04:29.062865   50808 main.go:141] libmachine: Making call to close driver server
	I1128 01:04:29.062870   50808 main.go:141] libmachine: Making call to close driver server
	I1128 01:04:29.062879   50808 main.go:141] libmachine: (newest-cni-517109) Calling .Close
	I1128 01:04:29.062883   50808 main.go:141] libmachine: (newest-cni-517109) Calling .Close
	I1128 01:04:29.063168   50808 main.go:141] libmachine: Successfully made call to close driver server
	I1128 01:04:29.063187   50808 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 01:04:29.063196   50808 main.go:141] libmachine: Making call to close driver server
	I1128 01:04:29.063208   50808 main.go:141] libmachine: (newest-cni-517109) Calling .Close
	I1128 01:04:29.063280   50808 main.go:141] libmachine: Successfully made call to close driver server
	I1128 01:04:29.063288   50808 main.go:141] libmachine: (newest-cni-517109) DBG | Closing plugin on server side
	I1128 01:04:29.063299   50808 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 01:04:29.063310   50808 main.go:141] libmachine: Making call to close driver server
	I1128 01:04:29.063320   50808 main.go:141] libmachine: (newest-cni-517109) Calling .Close
	I1128 01:04:29.063506   50808 main.go:141] libmachine: (newest-cni-517109) DBG | Closing plugin on server side
	I1128 01:04:29.063542   50808 main.go:141] libmachine: Successfully made call to close driver server
	I1128 01:04:29.063560   50808 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 01:04:29.063595   50808 main.go:141] libmachine: (newest-cni-517109) DBG | Closing plugin on server side
	I1128 01:04:29.063622   50808 main.go:141] libmachine: Successfully made call to close driver server
	I1128 01:04:29.063633   50808 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 01:04:29.073921   50808 main.go:141] libmachine: Making call to close driver server
	I1128 01:04:29.073947   50808 main.go:141] libmachine: (newest-cni-517109) Calling .Close
	I1128 01:04:29.074209   50808 main.go:141] libmachine: Successfully made call to close driver server
	I1128 01:04:29.074227   50808 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 01:04:29.077015   50808 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1128 01:04:29.078478   50808 addons.go:502] enable addons completed in 3.988714222s: enabled=[storage-provisioner default-storageclass]
	I1128 01:04:29.078520   50808 start.go:233] waiting for cluster config update ...
	I1128 01:04:29.078534   50808 start.go:242] writing updated cluster config ...
	I1128 01:04:29.078807   50808 ssh_runner.go:195] Run: rm -f paused
	I1128 01:04:29.157481   50808 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.0 (minor skew: 1)
	I1128 01:04:29.159499   50808 out.go:177] * Done! kubectl is now configured to use "newest-cni-517109" cluster and "default" namespace by default
	I1128 01:04:28.890514   51138 out.go:204]   - Generating certificates and keys ...
	I1128 01:04:28.890712   51138 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 01:04:28.890821   51138 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 01:04:28.890943   51138 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1128 01:04:29.479958   51138 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1128 01:04:29.603210   51138 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1128 01:04:29.658105   51138 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1128 01:04:29.845384   51138 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1128 01:04:29.845565   51138 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [auto-167798 localhost] and IPs [192.168.61.116 127.0.0.1 ::1]
	I1128 01:04:30.124053   51138 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1128 01:04:30.124295   51138 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [auto-167798 localhost] and IPs [192.168.61.116 127.0.0.1 ::1]
	I1128 01:04:30.268001   51138 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1128 01:04:30.441703   51138 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1128 01:04:30.645354   51138 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1128 01:04:30.645823   51138 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 01:04:30.723895   51138 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 01:04:30.977199   51138 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 01:04:31.255946   51138 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 01:04:31.438011   51138 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 01:04:31.438839   51138 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 01:04:31.441316   51138 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 01:04:31.443904   51138 out.go:204]   - Booting up control plane ...
	I1128 01:04:31.444071   51138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 01:04:31.444171   51138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 01:04:31.444272   51138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 01:04:31.459395   51138 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 01:04:31.460452   51138 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 01:04:31.460547   51138 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 01:04:31.596706   51138 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 01:04:39.097592   51138 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.501882 seconds
	I1128 01:04:39.097721   51138 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 01:04:39.121531   51138 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 01:04:39.654889   51138 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 01:04:39.655138   51138 kubeadm.go:322] [mark-control-plane] Marking the node auto-167798 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 01:04:40.178644   51138 kubeadm.go:322] [bootstrap-token] Using token: ndoz7w.e3z39298mwqv8bxs
	I1128 01:04:40.180259   51138 out.go:204]   - Configuring RBAC rules ...
	I1128 01:04:40.180386   51138 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 01:04:40.186324   51138 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 01:04:40.196879   51138 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 01:04:40.201649   51138 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 01:04:40.210573   51138 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 01:04:40.216036   51138 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 01:04:40.236182   51138 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 01:04:40.494380   51138 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 01:04:40.593334   51138 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 01:04:40.593359   51138 kubeadm.go:322] 
	I1128 01:04:40.593438   51138 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 01:04:40.593450   51138 kubeadm.go:322] 
	I1128 01:04:40.593550   51138 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 01:04:40.593561   51138 kubeadm.go:322] 
	I1128 01:04:40.593596   51138 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 01:04:40.593703   51138 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 01:04:40.593788   51138 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 01:04:40.593799   51138 kubeadm.go:322] 
	I1128 01:04:40.593872   51138 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 01:04:40.593883   51138 kubeadm.go:322] 
	I1128 01:04:40.593965   51138 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 01:04:40.593976   51138 kubeadm.go:322] 
	I1128 01:04:40.594047   51138 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 01:04:40.594153   51138 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 01:04:40.594237   51138 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 01:04:40.594248   51138 kubeadm.go:322] 
	I1128 01:04:40.594353   51138 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 01:04:40.594455   51138 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 01:04:40.594491   51138 kubeadm.go:322] 
	I1128 01:04:40.594590   51138 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ndoz7w.e3z39298mwqv8bxs \
	I1128 01:04:40.594686   51138 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1128 01:04:40.594711   51138 kubeadm.go:322] 	--control-plane 
	I1128 01:04:40.594715   51138 kubeadm.go:322] 
	I1128 01:04:40.594830   51138 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 01:04:40.594842   51138 kubeadm.go:322] 
	I1128 01:04:40.594934   51138 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ndoz7w.e3z39298mwqv8bxs \
	I1128 01:04:40.595080   51138 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1128 01:04:40.595315   51138 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 01:04:40.595355   51138 cni.go:84] Creating CNI manager for ""
	I1128 01:04:40.595371   51138 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 01:04:40.597070   51138 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 01:04:40.598403   51138 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 01:04:40.638378   51138 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 01:04:40.690207   51138 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 01:04:40.690274   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:40.690315   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=auto-167798 minikube.k8s.io/updated_at=2023_11_28T01_04_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:40.763874   51138 ops.go:34] apiserver oom_adj: -16
	I1128 01:04:40.966622   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:41.078510   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:41.689338   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:42.188748   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:42.689590   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:43.189374   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:43.688683   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:44.189438   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:44.689253   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:45.189715   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:45.688906   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:46.189708   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:46.688889   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:47.189521   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:47.689525   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:48.188914   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:48.689045   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:49.188902   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:49.689750   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:50.189119   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:50.689532   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:51.188692   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:51.689256   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:52.188774   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:52.689060   51138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:04:52.890272   51138 kubeadm.go:1081] duration metric: took 12.2000542s to wait for elevateKubeSystemPrivileges.
	I1128 01:04:52.890309   51138 kubeadm.go:406] StartCluster complete in 24.748784477s
	I1128 01:04:52.890331   51138 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:04:52.890424   51138 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 01:04:52.893421   51138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:04:52.893700   51138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 01:04:52.893708   51138 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 01:04:52.893792   51138 addons.go:69] Setting storage-provisioner=true in profile "auto-167798"
	I1128 01:04:52.893813   51138 addons.go:231] Setting addon storage-provisioner=true in "auto-167798"
	I1128 01:04:52.893878   51138 host.go:66] Checking if "auto-167798" exists ...
	I1128 01:04:52.893895   51138 config.go:182] Loaded profile config "auto-167798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 01:04:52.893959   51138 addons.go:69] Setting default-storageclass=true in profile "auto-167798"
	I1128 01:04:52.893994   51138 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-167798"
	I1128 01:04:52.894335   51138 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 01:04:52.894368   51138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 01:04:52.894369   51138 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 01:04:52.894394   51138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 01:04:52.910584   51138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45381
	I1128 01:04:52.911063   51138 main.go:141] libmachine: () Calling .GetVersion
	I1128 01:04:52.911585   51138 main.go:141] libmachine: Using API Version  1
	I1128 01:04:52.911610   51138 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 01:04:52.912014   51138 main.go:141] libmachine: () Calling .GetMachineName
	I1128 01:04:52.912212   51138 main.go:141] libmachine: (auto-167798) Calling .GetState
	I1128 01:04:52.912986   51138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34923
	I1128 01:04:52.913419   51138 main.go:141] libmachine: () Calling .GetVersion
	I1128 01:04:52.913977   51138 main.go:141] libmachine: Using API Version  1
	I1128 01:04:52.914005   51138 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 01:04:52.914358   51138 main.go:141] libmachine: () Calling .GetMachineName
	I1128 01:04:52.914986   51138 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 01:04:52.915031   51138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 01:04:52.916043   51138 addons.go:231] Setting addon default-storageclass=true in "auto-167798"
	I1128 01:04:52.916090   51138 host.go:66] Checking if "auto-167798" exists ...
	I1128 01:04:52.916393   51138 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 01:04:52.916421   51138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 01:04:52.930044   51138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36637
	I1128 01:04:52.930556   51138 main.go:141] libmachine: () Calling .GetVersion
	I1128 01:04:52.931070   51138 main.go:141] libmachine: Using API Version  1
	I1128 01:04:52.931090   51138 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 01:04:52.931116   51138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43599
	I1128 01:04:52.931487   51138 main.go:141] libmachine: () Calling .GetMachineName
	I1128 01:04:52.931525   51138 main.go:141] libmachine: () Calling .GetVersion
	I1128 01:04:52.931683   51138 main.go:141] libmachine: (auto-167798) Calling .GetState
	I1128 01:04:52.931991   51138 main.go:141] libmachine: Using API Version  1
	I1128 01:04:52.932013   51138 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 01:04:52.932370   51138 main.go:141] libmachine: () Calling .GetMachineName
	I1128 01:04:52.933074   51138 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 01:04:52.933108   51138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 01:04:52.934521   51138 main.go:141] libmachine: (auto-167798) Calling .DriverName
	I1128 01:04:52.936491   51138 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 01:04:52.938439   51138 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 01:04:52.938458   51138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 01:04:52.938480   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHHostname
	I1128 01:04:52.941795   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:52.942345   51138 main.go:141] libmachine: (auto-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:0f:21", ip: ""} in network mk-auto-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:04:08 +0000 UTC Type:0 Mac:52:54:00:58:0f:21 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:auto-167798 Clientid:01:52:54:00:58:0f:21}
	I1128 01:04:52.942393   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined IP address 192.168.61.116 and MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:52.942625   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHPort
	I1128 01:04:52.943045   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHKeyPath
	I1128 01:04:52.943233   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHUsername
	I1128 01:04:52.943400   51138 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/auto-167798/id_rsa Username:docker}
	I1128 01:04:52.949584   51138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33733
	I1128 01:04:52.950063   51138 main.go:141] libmachine: () Calling .GetVersion
	I1128 01:04:52.950469   51138 main.go:141] libmachine: Using API Version  1
	I1128 01:04:52.950488   51138 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 01:04:52.950868   51138 main.go:141] libmachine: () Calling .GetMachineName
	I1128 01:04:52.951058   51138 main.go:141] libmachine: (auto-167798) Calling .GetState
	I1128 01:04:52.952979   51138 main.go:141] libmachine: (auto-167798) Calling .DriverName
	I1128 01:04:52.953250   51138 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 01:04:52.953267   51138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 01:04:52.953288   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHHostname
	I1128 01:04:52.956453   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:52.956821   51138 main.go:141] libmachine: (auto-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:0f:21", ip: ""} in network mk-auto-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:04:08 +0000 UTC Type:0 Mac:52:54:00:58:0f:21 Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:auto-167798 Clientid:01:52:54:00:58:0f:21}
	I1128 01:04:52.956854   51138 main.go:141] libmachine: (auto-167798) DBG | domain auto-167798 has defined IP address 192.168.61.116 and MAC address 52:54:00:58:0f:21 in network mk-auto-167798
	I1128 01:04:52.957057   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHPort
	I1128 01:04:52.957243   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHKeyPath
	I1128 01:04:52.957387   51138 main.go:141] libmachine: (auto-167798) Calling .GetSSHUsername
	I1128 01:04:52.957515   51138 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/auto-167798/id_rsa Username:docker}
	W1128 01:04:53.052854   51138 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "auto-167798" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1128 01:04:53.052894   51138 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1128 01:04:53.052913   51138 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 01:04:53.054671   51138 out.go:177] * Verifying Kubernetes components...
	I1128 01:04:53.056499   51138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 01:04:53.113033   51138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 01:04:53.114448   51138 node_ready.go:35] waiting up to 15m0s for node "auto-167798" to be "Ready" ...
	I1128 01:04:53.123797   51138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 01:04:53.155837   51138 node_ready.go:49] node "auto-167798" has status "Ready":"True"
	I1128 01:04:53.155861   51138 node_ready.go:38] duration metric: took 41.381581ms waiting for node "auto-167798" to be "Ready" ...
	I1128 01:04:53.155870   51138 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 01:04:53.181195   51138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 01:04:53.202831   51138 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-r6trs" in "kube-system" namespace to be "Ready" ...
	I1128 01:04:54.667211   51138 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.554126809s)
	I1128 01:04:54.667244   51138 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1128 01:04:55.020554   51138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.896711789s)
	I1128 01:04:55.020637   51138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.839407726s)
	I1128 01:04:55.020653   51138 main.go:141] libmachine: Making call to close driver server
	I1128 01:04:55.020671   51138 main.go:141] libmachine: (auto-167798) Calling .Close
	I1128 01:04:55.020699   51138 main.go:141] libmachine: Making call to close driver server
	I1128 01:04:55.020721   51138 main.go:141] libmachine: (auto-167798) Calling .Close
	I1128 01:04:55.022395   51138 main.go:141] libmachine: (auto-167798) DBG | Closing plugin on server side
	I1128 01:04:55.022421   51138 main.go:141] libmachine: (auto-167798) DBG | Closing plugin on server side
	I1128 01:04:55.022439   51138 main.go:141] libmachine: Successfully made call to close driver server
	I1128 01:04:55.022461   51138 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 01:04:55.022466   51138 main.go:141] libmachine: Successfully made call to close driver server
	I1128 01:04:55.022486   51138 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 01:04:55.022497   51138 main.go:141] libmachine: Making call to close driver server
	I1128 01:04:55.022472   51138 main.go:141] libmachine: Making call to close driver server
	I1128 01:04:55.022512   51138 main.go:141] libmachine: (auto-167798) Calling .Close
	I1128 01:04:55.022548   51138 main.go:141] libmachine: (auto-167798) Calling .Close
	I1128 01:04:55.022723   51138 main.go:141] libmachine: Successfully made call to close driver server
	I1128 01:04:55.022738   51138 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 01:04:55.022780   51138 main.go:141] libmachine: (auto-167798) DBG | Closing plugin on server side
	I1128 01:04:55.022790   51138 main.go:141] libmachine: Successfully made call to close driver server
	I1128 01:04:55.022803   51138 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 01:04:55.036733   51138 main.go:141] libmachine: Making call to close driver server
	I1128 01:04:55.036797   51138 main.go:141] libmachine: (auto-167798) Calling .Close
	I1128 01:04:55.037032   51138 main.go:141] libmachine: (auto-167798) DBG | Closing plugin on server side
	I1128 01:04:55.037046   51138 main.go:141] libmachine: Successfully made call to close driver server
	I1128 01:04:55.037062   51138 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 01:04:55.040584   51138 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1128 01:04:55.042260   51138 addons.go:502] enable addons completed in 2.148551544s: enabled=[storage-provisioner default-storageclass]
	I1128 01:04:55.440470   51138 pod_ready.go:102] pod "coredns-5dd5756b68-r6trs" in "kube-system" namespace has status "Ready":"False"
	I1128 01:04:57.931447   51138 pod_ready.go:102] pod "coredns-5dd5756b68-r6trs" in "kube-system" namespace has status "Ready":"False"
	I1128 01:04:59.933070   51138 pod_ready.go:102] pod "coredns-5dd5756b68-r6trs" in "kube-system" namespace has status "Ready":"False"
	I1128 01:05:02.432451   51138 pod_ready.go:102] pod "coredns-5dd5756b68-r6trs" in "kube-system" namespace has status "Ready":"False"
	I1128 01:05:04.433281   51138 pod_ready.go:102] pod "coredns-5dd5756b68-r6trs" in "kube-system" namespace has status "Ready":"False"
	I1128 01:05:06.930979   51138 pod_ready.go:102] pod "coredns-5dd5756b68-r6trs" in "kube-system" namespace has status "Ready":"False"
	I1128 01:05:08.934264   51138 pod_ready.go:102] pod "coredns-5dd5756b68-r6trs" in "kube-system" namespace has status "Ready":"False"
	I1128 01:05:11.431278   51138 pod_ready.go:102] pod "coredns-5dd5756b68-r6trs" in "kube-system" namespace has status "Ready":"False"
	I1128 01:05:13.437744   51138 pod_ready.go:102] pod "coredns-5dd5756b68-r6trs" in "kube-system" namespace has status "Ready":"False"
	I1128 01:05:15.933522   51138 pod_ready.go:102] pod "coredns-5dd5756b68-r6trs" in "kube-system" namespace has status "Ready":"False"
	I1128 01:05:18.431914   51138 pod_ready.go:102] pod "coredns-5dd5756b68-r6trs" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 00:43:06 UTC, ends at Tue 2023-11-28 01:05:22 UTC. --
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.086601567Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133522086588535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=18d81a2b-33d8-4db8-aec0-d1d67eed6f26 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.087131178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4a2f7e43-f5f7-4f8d-8eaf-9ff762e091e4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.087176258Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4a2f7e43-f5f7-4f8d-8eaf-9ff762e091e4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.087344749Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3fc8bf06b33b6bd3855dc00fec4b678dda94fa211e2dd4538bc17ab34dbf4a1,PodSandboxId:a33478695b934c5d6364b0e621311747cb2966464b2dadb0b11b02937af5e152,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132516459194822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c62a8419-b0e5-4330-a49b-986693e183b2,},Annotations:map[string]string{io.kubernetes.container.hash: 19218868,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6511a68179cfc712850559fdc55e8bd8bb67a9852597321494d0339ebbb4099f,PodSandboxId:fc135d72f59edc39c2517d19f324e2783df33a5a7c25f81324c36a2c774e041f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701132515822591632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w5ct2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ac66db-fe8d-419d-9237-b0dd4077559a,},Annotations:map[string]string{io.kubernetes.container.hash: 52abbc6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59d10fb9061bc959669d14d9bd0b2c9a179dc9522e5de19db2952745217739f,PodSandboxId:d664a9b1c7f81feeb7bfc090d473b4be136da266d388d5f76051731b5cc92b34,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701132515168481483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kjg5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf956dfb-3a7f-4605-a849-ee887562fce5,},Annotations:map[string]string{io.kubernetes.container.hash: 8a70d9e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcbd9b61aa21b42886e337807c7bdda8c90f1e19b4dde0b4a89273c7ff8f95cd,PodSandboxId:45bd0bcfb925db9d80582ec505cc9ed0a1c586eae6c418ddd0fe4c29356def77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701132491472482658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: d739abfe9178a563e914606688626e19,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eef8dc0f07ce7945876e782eef7f5863d8bfe65abe904b3ad26a3dab24cd57c,PodSandboxId:404ce39b26f1718910cc1467ee65993f8ab47320b28162f232ffa82042f1535a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701132491329027379,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 436d4e334a24347cc5d0fc652c17ba7b,},Annotations:
map[string]string{io.kubernetes.container.hash: 1587da43,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83b4ead516cfcc84bde5af39e3631dd4594ab102e6ab6e1aeba10747d1c88d0f,PodSandboxId:0d95a526bcbfdd92225e6f2efcfc0060f71a8d296153ea7f8958a733963c0d2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701132491049881693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb67ef96df179669e13da188205336d,},Annotations:map[string
]string{io.kubernetes.container.hash: 1a41d4de,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6c2dc2b090d3ebe0198e4f0617f64f77e33c67c72770d33d0a98646fa8840ed,PodSandboxId:7d666e1d1d3873fdc338af2cddf27dfd4296c2287639cc41a3009a39a18c8243,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701132490996328475,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338329e9c8fedff2d5801572cdf8d15
5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4a2f7e43-f5f7-4f8d-8eaf-9ff762e091e4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.131554139Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1a486ee2-e041-4b6a-86d7-cd57896f5e34 name=/runtime.v1.RuntimeService/Version
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.131610844Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1a486ee2-e041-4b6a-86d7-cd57896f5e34 name=/runtime.v1.RuntimeService/Version
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.132759577Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=08d69e03-69de-4c41-bf09-48442344ffda name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.133310426Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133522133290295,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=08d69e03-69de-4c41-bf09-48442344ffda name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.134226525Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c8650dbe-f88e-4a2b-b2b4-535fc2db96e5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.134300846Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c8650dbe-f88e-4a2b-b2b4-535fc2db96e5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.134473979Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3fc8bf06b33b6bd3855dc00fec4b678dda94fa211e2dd4538bc17ab34dbf4a1,PodSandboxId:a33478695b934c5d6364b0e621311747cb2966464b2dadb0b11b02937af5e152,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132516459194822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c62a8419-b0e5-4330-a49b-986693e183b2,},Annotations:map[string]string{io.kubernetes.container.hash: 19218868,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6511a68179cfc712850559fdc55e8bd8bb67a9852597321494d0339ebbb4099f,PodSandboxId:fc135d72f59edc39c2517d19f324e2783df33a5a7c25f81324c36a2c774e041f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701132515822591632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w5ct2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ac66db-fe8d-419d-9237-b0dd4077559a,},Annotations:map[string]string{io.kubernetes.container.hash: 52abbc6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59d10fb9061bc959669d14d9bd0b2c9a179dc9522e5de19db2952745217739f,PodSandboxId:d664a9b1c7f81feeb7bfc090d473b4be136da266d388d5f76051731b5cc92b34,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701132515168481483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kjg5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf956dfb-3a7f-4605-a849-ee887562fce5,},Annotations:map[string]string{io.kubernetes.container.hash: 8a70d9e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcbd9b61aa21b42886e337807c7bdda8c90f1e19b4dde0b4a89273c7ff8f95cd,PodSandboxId:45bd0bcfb925db9d80582ec505cc9ed0a1c586eae6c418ddd0fe4c29356def77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701132491472482658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: d739abfe9178a563e914606688626e19,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eef8dc0f07ce7945876e782eef7f5863d8bfe65abe904b3ad26a3dab24cd57c,PodSandboxId:404ce39b26f1718910cc1467ee65993f8ab47320b28162f232ffa82042f1535a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701132491329027379,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 436d4e334a24347cc5d0fc652c17ba7b,},Annotations:
map[string]string{io.kubernetes.container.hash: 1587da43,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83b4ead516cfcc84bde5af39e3631dd4594ab102e6ab6e1aeba10747d1c88d0f,PodSandboxId:0d95a526bcbfdd92225e6f2efcfc0060f71a8d296153ea7f8958a733963c0d2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701132491049881693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb67ef96df179669e13da188205336d,},Annotations:map[string
]string{io.kubernetes.container.hash: 1a41d4de,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6c2dc2b090d3ebe0198e4f0617f64f77e33c67c72770d33d0a98646fa8840ed,PodSandboxId:7d666e1d1d3873fdc338af2cddf27dfd4296c2287639cc41a3009a39a18c8243,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701132490996328475,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338329e9c8fedff2d5801572cdf8d15
5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c8650dbe-f88e-4a2b-b2b4-535fc2db96e5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.178198849Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=12f58480-9840-4fb9-bffa-ebd0161f6128 name=/runtime.v1.RuntimeService/Version
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.178282721Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=12f58480-9840-4fb9-bffa-ebd0161f6128 name=/runtime.v1.RuntimeService/Version
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.179684254Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=73c7830c-fa6b-4b49-9fd8-8f619915cb65 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.182989294Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133522182794706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=73c7830c-fa6b-4b49-9fd8-8f619915cb65 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.183781730Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=50760451-36f7-4d65-b645-8c6367a220e4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.183850787Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=50760451-36f7-4d65-b645-8c6367a220e4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.184005086Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3fc8bf06b33b6bd3855dc00fec4b678dda94fa211e2dd4538bc17ab34dbf4a1,PodSandboxId:a33478695b934c5d6364b0e621311747cb2966464b2dadb0b11b02937af5e152,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132516459194822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c62a8419-b0e5-4330-a49b-986693e183b2,},Annotations:map[string]string{io.kubernetes.container.hash: 19218868,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6511a68179cfc712850559fdc55e8bd8bb67a9852597321494d0339ebbb4099f,PodSandboxId:fc135d72f59edc39c2517d19f324e2783df33a5a7c25f81324c36a2c774e041f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701132515822591632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w5ct2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ac66db-fe8d-419d-9237-b0dd4077559a,},Annotations:map[string]string{io.kubernetes.container.hash: 52abbc6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59d10fb9061bc959669d14d9bd0b2c9a179dc9522e5de19db2952745217739f,PodSandboxId:d664a9b1c7f81feeb7bfc090d473b4be136da266d388d5f76051731b5cc92b34,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701132515168481483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kjg5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf956dfb-3a7f-4605-a849-ee887562fce5,},Annotations:map[string]string{io.kubernetes.container.hash: 8a70d9e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcbd9b61aa21b42886e337807c7bdda8c90f1e19b4dde0b4a89273c7ff8f95cd,PodSandboxId:45bd0bcfb925db9d80582ec505cc9ed0a1c586eae6c418ddd0fe4c29356def77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701132491472482658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: d739abfe9178a563e914606688626e19,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eef8dc0f07ce7945876e782eef7f5863d8bfe65abe904b3ad26a3dab24cd57c,PodSandboxId:404ce39b26f1718910cc1467ee65993f8ab47320b28162f232ffa82042f1535a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701132491329027379,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 436d4e334a24347cc5d0fc652c17ba7b,},Annotations:
map[string]string{io.kubernetes.container.hash: 1587da43,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83b4ead516cfcc84bde5af39e3631dd4594ab102e6ab6e1aeba10747d1c88d0f,PodSandboxId:0d95a526bcbfdd92225e6f2efcfc0060f71a8d296153ea7f8958a733963c0d2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701132491049881693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb67ef96df179669e13da188205336d,},Annotations:map[string
]string{io.kubernetes.container.hash: 1a41d4de,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6c2dc2b090d3ebe0198e4f0617f64f77e33c67c72770d33d0a98646fa8840ed,PodSandboxId:7d666e1d1d3873fdc338af2cddf27dfd4296c2287639cc41a3009a39a18c8243,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701132490996328475,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338329e9c8fedff2d5801572cdf8d15
5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=50760451-36f7-4d65-b645-8c6367a220e4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.220889728Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=cad236e4-48f1-40fd-ab64-f0aedbbd72b5 name=/runtime.v1.RuntimeService/Version
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.220981058Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=cad236e4-48f1-40fd-ab64-f0aedbbd72b5 name=/runtime.v1.RuntimeService/Version
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.222599553Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c729d241-138a-4b18-b05f-fb29c8bdb5aa name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.222972022Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133522222959322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c729d241-138a-4b18-b05f-fb29c8bdb5aa name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.223915678Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=47a2aaad-d904-4cf7-a453-9a6ac2826334 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.223994832Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=47a2aaad-d904-4cf7-a453-9a6ac2826334 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:05:22 embed-certs-304541 crio[718]: time="2023-11-28 01:05:22.224231929Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e3fc8bf06b33b6bd3855dc00fec4b678dda94fa211e2dd4538bc17ab34dbf4a1,PodSandboxId:a33478695b934c5d6364b0e621311747cb2966464b2dadb0b11b02937af5e152,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132516459194822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c62a8419-b0e5-4330-a49b-986693e183b2,},Annotations:map[string]string{io.kubernetes.container.hash: 19218868,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6511a68179cfc712850559fdc55e8bd8bb67a9852597321494d0339ebbb4099f,PodSandboxId:fc135d72f59edc39c2517d19f324e2783df33a5a7c25f81324c36a2c774e041f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701132515822591632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w5ct2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ac66db-fe8d-419d-9237-b0dd4077559a,},Annotations:map[string]string{io.kubernetes.container.hash: 52abbc6d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e59d10fb9061bc959669d14d9bd0b2c9a179dc9522e5de19db2952745217739f,PodSandboxId:d664a9b1c7f81feeb7bfc090d473b4be136da266d388d5f76051731b5cc92b34,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701132515168481483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kjg5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf956dfb-3a7f-4605-a849-ee887562fce5,},Annotations:map[string]string{io.kubernetes.container.hash: 8a70d9e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcbd9b61aa21b42886e337807c7bdda8c90f1e19b4dde0b4a89273c7ff8f95cd,PodSandboxId:45bd0bcfb925db9d80582ec505cc9ed0a1c586eae6c418ddd0fe4c29356def77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701132491472482658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: d739abfe9178a563e914606688626e19,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eef8dc0f07ce7945876e782eef7f5863d8bfe65abe904b3ad26a3dab24cd57c,PodSandboxId:404ce39b26f1718910cc1467ee65993f8ab47320b28162f232ffa82042f1535a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701132491329027379,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 436d4e334a24347cc5d0fc652c17ba7b,},Annotations:
map[string]string{io.kubernetes.container.hash: 1587da43,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83b4ead516cfcc84bde5af39e3631dd4594ab102e6ab6e1aeba10747d1c88d0f,PodSandboxId:0d95a526bcbfdd92225e6f2efcfc0060f71a8d296153ea7f8958a733963c0d2e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701132491049881693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adb67ef96df179669e13da188205336d,},Annotations:map[string
]string{io.kubernetes.container.hash: 1a41d4de,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6c2dc2b090d3ebe0198e4f0617f64f77e33c67c72770d33d0a98646fa8840ed,PodSandboxId:7d666e1d1d3873fdc338af2cddf27dfd4296c2287639cc41a3009a39a18c8243,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701132490996328475,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-304541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 338329e9c8fedff2d5801572cdf8d15
5,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=47a2aaad-d904-4cf7-a453-9a6ac2826334 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e3fc8bf06b33b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   a33478695b934       storage-provisioner
	6511a68179cfc       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   16 minutes ago      Running             kube-proxy                0                   fc135d72f59ed       kube-proxy-w5ct2
	e59d10fb9061b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   16 minutes ago      Running             coredns                   0                   d664a9b1c7f81       coredns-5dd5756b68-kjg5f
	bcbd9b61aa21b       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   17 minutes ago      Running             kube-scheduler            2                   45bd0bcfb925d       kube-scheduler-embed-certs-304541
	9eef8dc0f07ce       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   17 minutes ago      Running             etcd                      2                   404ce39b26f17       etcd-embed-certs-304541
	83b4ead516cfc       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   17 minutes ago      Running             kube-apiserver            2                   0d95a526bcbfd       kube-apiserver-embed-certs-304541
	c6c2dc2b090d3       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   17 minutes ago      Running             kube-controller-manager   2                   7d666e1d1d387       kube-controller-manager-embed-certs-304541
	
	* 
	* ==> coredns [e59d10fb9061bc959669d14d9bd0b2c9a179dc9522e5de19db2952745217739f] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:33774 - 6383 "HINFO IN 7828742696619454455.1271914779107748957. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022305153s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-304541
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-304541
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=embed-certs-304541
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T00_48_19_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 00:48:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-304541
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 01:05:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 01:03:59 +0000   Tue, 28 Nov 2023 00:48:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 01:03:59 +0000   Tue, 28 Nov 2023 00:48:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 01:03:59 +0000   Tue, 28 Nov 2023 00:48:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 01:03:59 +0000   Tue, 28 Nov 2023 00:48:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.93
	  Hostname:    embed-certs-304541
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 da1ff02b83bc434190c2ec845a961bf6
	  System UUID:                da1ff02b-83bc-4341-90c2-ec845a961bf6
	  Boot ID:                    07a8ef9b-7aeb-4f02-abdc-d4b060d69676
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-kjg5f                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-304541                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-embed-certs-304541             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-embed-certs-304541    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-w5ct2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-304541             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-57f55c9bc5-xzz2t               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node embed-certs-304541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node embed-certs-304541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node embed-certs-304541 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node embed-certs-304541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node embed-certs-304541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node embed-certs-304541 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             17m                kubelet          Node embed-certs-304541 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                17m                kubelet          Node embed-certs-304541 status is now: NodeReady
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           16m                node-controller  Node embed-certs-304541 event: Registered Node embed-certs-304541 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov28 00:42] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068406] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Nov28 00:43] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.404011] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147180] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000008] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.399396] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.605731] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.124990] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.153846] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.109384] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.216137] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[ +16.978013] systemd-fstab-generator[918]: Ignoring "noauto" for root device
	[ +20.230386] kauditd_printk_skb: 29 callbacks suppressed
	[Nov28 00:48] systemd-fstab-generator[3509]: Ignoring "noauto" for root device
	[  +9.781826] systemd-fstab-generator[3829]: Ignoring "noauto" for root device
	[ +13.478507] kauditd_printk_skb: 2 callbacks suppressed
	[  +9.110837] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [9eef8dc0f07ce7945876e782eef7f5863d8bfe65abe904b3ad26a3dab24cd57c] <==
	* {"level":"info","ts":"2023-11-28T00:48:13.950403Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T00:48:13.950422Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-28T00:48:13.950429Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-28T00:58:14.275901Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":713}
	{"level":"info","ts":"2023-11-28T00:58:14.278788Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":713,"took":"2.046151ms","hash":503782246}
	{"level":"info","ts":"2023-11-28T00:58:14.278895Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":503782246,"revision":713,"compact-revision":-1}
	{"level":"info","ts":"2023-11-28T01:03:14.289742Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":957}
	{"level":"info","ts":"2023-11-28T01:03:14.29271Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":957,"took":"2.054374ms","hash":1870842877}
	{"level":"info","ts":"2023-11-28T01:03:14.2929Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1870842877,"revision":957,"compact-revision":713}
	{"level":"warn","ts":"2023-11-28T01:03:58.083912Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.124738ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3738704655612524280 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-304541\" mod_revision:1229 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-304541\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-304541\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-11-28T01:03:58.084446Z","caller":"traceutil/trace.go:171","msg":"trace[379628766] linearizableReadLoop","detail":"{readStateIndex:1445; appliedIndex:1443; }","duration":"315.684185ms","start":"2023-11-28T01:03:57.768714Z","end":"2023-11-28T01:03:58.084398Z","steps":["trace[379628766] 'read index received'  (duration: 76.519887ms)","trace[379628766] 'applied index is now lower than readState.Index'  (duration: 239.163346ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-28T01:03:58.084545Z","caller":"traceutil/trace.go:171","msg":"trace[1838559540] transaction","detail":"{read_only:false; response_revision:1236; number_of_response:1; }","duration":"439.080231ms","start":"2023-11-28T01:03:57.645443Z","end":"2023-11-28T01:03:58.084524Z","steps":["trace[1838559540] 'process raft request'  (duration: 199.779359ms)","trace[1838559540] 'compare'  (duration: 236.783211ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-28T01:03:58.084625Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"315.916269ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-28T01:03:58.084748Z","caller":"traceutil/trace.go:171","msg":"trace[175085508] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1237; }","duration":"316.047518ms","start":"2023-11-28T01:03:57.768689Z","end":"2023-11-28T01:03:58.084737Z","steps":["trace[175085508] 'agreement among raft nodes before linearized reading'  (duration: 315.798148ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T01:03:58.084639Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T01:03:57.645426Z","time spent":"439.17331ms","remote":"127.0.0.1:40784","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":561,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-304541\" mod_revision:1229 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-304541\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-304541\" > >"}
	{"level":"info","ts":"2023-11-28T01:03:58.084832Z","caller":"traceutil/trace.go:171","msg":"trace[1724783806] transaction","detail":"{read_only:false; response_revision:1237; number_of_response:1; }","duration":"438.065321ms","start":"2023-11-28T01:03:57.646758Z","end":"2023-11-28T01:03:58.084823Z","steps":["trace[1724783806] 'process raft request'  (duration: 437.525042ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T01:03:58.084925Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T01:03:57.646747Z","time spent":"438.14584ms","remote":"127.0.0.1:40730","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.93\" mod_revision:1228 > success:<request_put:<key:\"/registry/masterleases/192.168.50.93\" value_size:66 lease:3738704655612524278 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.93\" > >"}
	{"level":"warn","ts":"2023-11-28T01:03:58.084792Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T01:03:57.768673Z","time spent":"316.108226ms","remote":"127.0.0.1:40722","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-11-28T01:03:58.345577Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.175456ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3738704655612524288 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1235 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-11-28T01:03:58.345748Z","caller":"traceutil/trace.go:171","msg":"trace[939991667] transaction","detail":"{read_only:false; response_revision:1238; number_of_response:1; }","duration":"206.249723ms","start":"2023-11-28T01:03:58.139481Z","end":"2023-11-28T01:03:58.345731Z","steps":["trace[939991667] 'process raft request'  (duration: 66.852636ms)","trace[939991667] 'compare'  (duration: 138.937151ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-28T01:03:58.614455Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.703796ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-28T01:03:58.614728Z","caller":"traceutil/trace.go:171","msg":"trace[1410344335] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1238; }","duration":"165.973966ms","start":"2023-11-28T01:03:58.448725Z","end":"2023-11-28T01:03:58.614699Z","steps":["trace[1410344335] 'range keys from in-memory index tree'  (duration: 165.586955ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-28T01:03:59.532712Z","caller":"traceutil/trace.go:171","msg":"trace[101431198] transaction","detail":"{read_only:false; response_revision:1239; number_of_response:1; }","duration":"252.86361ms","start":"2023-11-28T01:03:59.279834Z","end":"2023-11-28T01:03:59.532698Z","steps":["trace[101431198] 'process raft request'  (duration: 252.446485ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-28T01:04:28.108939Z","caller":"traceutil/trace.go:171","msg":"trace[1433527149] transaction","detail":"{read_only:false; response_revision:1261; number_of_response:1; }","duration":"479.916612ms","start":"2023-11-28T01:04:27.62901Z","end":"2023-11-28T01:04:28.108926Z","steps":["trace[1433527149] 'process raft request'  (duration: 479.717859ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T01:04:28.109137Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T01:04:27.628992Z","time spent":"480.030819ms","remote":"127.0.0.1:40730","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.93\" mod_revision:1253 > success:<request_put:<key:\"/registry/masterleases/192.168.50.93\" value_size:66 lease:3738704655612524428 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.93\" > >"}
	
	* 
	* ==> kernel <==
	*  01:05:22 up 22 min,  0 users,  load average: 0.29, 0.24, 0.18
	Linux embed-certs-304541 5.10.57 #1 SMP Mon Nov 27 21:58:27 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [83b4ead516cfcc84bde5af39e3631dd4594ab102e6ab6e1aeba10747d1c88d0f] <==
	* I1128 01:03:15.806396       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1128 01:03:16.806121       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 01:03:16.806147       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 01:03:16.806156       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 01:03:16.806191       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 01:03:16.806246       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 01:03:16.807401       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 01:03:58.086684       1 trace.go:236] Trace[958365899]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.50.93,type:*v1.Endpoints,resource:apiServerIPInfo (28-Nov-2023 01:03:57.546) (total time: 540ms):
	Trace[958365899]: ---"Transaction prepared" 98ms (01:03:57.646)
	Trace[958365899]: ---"Txn call completed" 440ms (01:03:58.086)
	Trace[958365899]: [540.268553ms] [540.268553ms] END
	I1128 01:04:15.714864       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1128 01:04:16.807355       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 01:04:16.807471       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 01:04:16.807499       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 01:04:16.807601       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 01:04:16.807665       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 01:04:16.808546       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 01:04:28.109675       1 trace.go:236] Trace[1586136511]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.50.93,type:*v1.Endpoints,resource:apiServerIPInfo (28-Nov-2023 01:04:27.547) (total time: 561ms):
	Trace[1586136511]: ---"Transaction prepared" 79ms (01:04:27.628)
	Trace[1586136511]: ---"Txn call completed" 481ms (01:04:28.109)
	Trace[1586136511]: [561.676691ms] [561.676691ms] END
	I1128 01:05:15.714413       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [c6c2dc2b090d3ebe0198e4f0617f64f77e33c67c72770d33d0a98646fa8840ed] <==
	* I1128 00:59:51.199531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="137.389µs"
	E1128 01:00:01.053514       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:00:01.539163       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:00:31.061666       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:00:31.548833       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:01:01.068139       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:01:01.557621       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:01:31.075445       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:01:31.567676       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:02:01.081283       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:02:01.577636       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:02:31.087468       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:02:31.589429       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:03:01.093023       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:03:01.597644       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:03:31.101779       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:03:31.607108       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:04:01.110280       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:04:01.618210       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:04:31.117690       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:04:31.628242       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1128 01:04:40.200027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="341.456µs"
	I1128 01:04:55.199213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="255.605µs"
	E1128 01:05:01.123765       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:05:01.637267       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [6511a68179cfc712850559fdc55e8bd8bb67a9852597321494d0339ebbb4099f] <==
	* I1128 00:48:36.608546       1 server_others.go:69] "Using iptables proxy"
	I1128 00:48:36.637875       1 node.go:141] Successfully retrieved node IP: 192.168.50.93
	I1128 00:48:36.742316       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1128 00:48:36.742544       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1128 00:48:36.750635       1 server_others.go:152] "Using iptables Proxier"
	I1128 00:48:36.751138       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1128 00:48:36.752334       1 server.go:846] "Version info" version="v1.28.4"
	I1128 00:48:36.752378       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 00:48:36.758108       1 config.go:188] "Starting service config controller"
	I1128 00:48:36.758125       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1128 00:48:36.758246       1 config.go:97] "Starting endpoint slice config controller"
	I1128 00:48:36.758252       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1128 00:48:36.758855       1 config.go:315] "Starting node config controller"
	I1128 00:48:36.758866       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1128 00:48:36.859358       1 shared_informer.go:318] Caches are synced for service config
	I1128 00:48:36.859608       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1128 00:48:36.860293       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [bcbd9b61aa21b42886e337807c7bdda8c90f1e19b4dde0b4a89273c7ff8f95cd] <==
	* W1128 00:48:15.871468       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1128 00:48:15.871510       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1128 00:48:15.871699       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1128 00:48:15.871732       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 00:48:16.693324       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1128 00:48:16.693438       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1128 00:48:16.766236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1128 00:48:16.766293       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1128 00:48:16.823319       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1128 00:48:16.823387       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1128 00:48:16.835702       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1128 00:48:16.835832       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1128 00:48:16.854618       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1128 00:48:16.854743       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 00:48:16.875626       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 00:48:16.875772       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1128 00:48:16.968221       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1128 00:48:16.968352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1128 00:48:17.014483       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 00:48:17.014536       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1128 00:48:17.052162       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 00:48:17.052249       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1128 00:48:17.112364       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 00:48:17.112674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1128 00:48:19.247226       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 00:43:06 UTC, ends at Tue 2023-11-28 01:05:22 UTC. --
	Nov 28 01:03:19 embed-certs-304541 kubelet[3836]: E1128 01:03:19.261190    3836 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 01:03:19 embed-certs-304541 kubelet[3836]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 01:03:19 embed-certs-304541 kubelet[3836]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 01:03:19 embed-certs-304541 kubelet[3836]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 01:03:19 embed-certs-304541 kubelet[3836]: E1128 01:03:19.379711    3836 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Nov 28 01:03:30 embed-certs-304541 kubelet[3836]: E1128 01:03:30.181551    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 01:03:41 embed-certs-304541 kubelet[3836]: E1128 01:03:41.182251    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 01:03:52 embed-certs-304541 kubelet[3836]: E1128 01:03:52.182389    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 01:04:04 embed-certs-304541 kubelet[3836]: E1128 01:04:04.182224    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 01:04:15 embed-certs-304541 kubelet[3836]: E1128 01:04:15.182274    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 01:04:19 embed-certs-304541 kubelet[3836]: E1128 01:04:19.262430    3836 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 01:04:19 embed-certs-304541 kubelet[3836]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 01:04:19 embed-certs-304541 kubelet[3836]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 01:04:19 embed-certs-304541 kubelet[3836]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 01:04:26 embed-certs-304541 kubelet[3836]: E1128 01:04:26.207156    3836 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Nov 28 01:04:26 embed-certs-304541 kubelet[3836]: E1128 01:04:26.207230    3836 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Nov 28 01:04:26 embed-certs-304541 kubelet[3836]: E1128 01:04:26.207459    3836 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-n5dpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pr
obeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-xzz2t_kube-system(926e9a40-f0fe-47ea-8e44-6816132ec0c2): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 28 01:04:26 embed-certs-304541 kubelet[3836]: E1128 01:04:26.207572    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 01:04:40 embed-certs-304541 kubelet[3836]: E1128 01:04:40.182144    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 01:04:55 embed-certs-304541 kubelet[3836]: E1128 01:04:55.184391    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 01:05:10 embed-certs-304541 kubelet[3836]: E1128 01:05:10.182095    3836 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xzz2t" podUID="926e9a40-f0fe-47ea-8e44-6816132ec0c2"
	Nov 28 01:05:19 embed-certs-304541 kubelet[3836]: E1128 01:05:19.261308    3836 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 01:05:19 embed-certs-304541 kubelet[3836]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 01:05:19 embed-certs-304541 kubelet[3836]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 01:05:19 embed-certs-304541 kubelet[3836]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	* 
	* ==> storage-provisioner [e3fc8bf06b33b6bd3855dc00fec4b678dda94fa211e2dd4538bc17ab34dbf4a1] <==
	* I1128 00:48:36.693977       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 00:48:36.708872       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 00:48:36.708940       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1128 00:48:36.723749       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1128 00:48:36.723968       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-304541_322210ba-84b6-48ab-aefc-b0ff548de6df!
	I1128 00:48:36.725214       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"469eb09c-dd9e-49b7-864d-91bb452a3562", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-304541_322210ba-84b6-48ab-aefc-b0ff548de6df became leader
	I1128 00:48:36.824644       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-304541_322210ba-84b6-48ab-aefc-b0ff548de6df!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-304541 -n embed-certs-304541
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-304541 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-xzz2t
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-304541 describe pod metrics-server-57f55c9bc5-xzz2t
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-304541 describe pod metrics-server-57f55c9bc5-xzz2t: exit status 1 (62.452562ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-xzz2t" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-304541 describe pod metrics-server-57f55c9bc5-xzz2t: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (461.90s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-488423 -n default-k8s-diff-port-488423
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-11-28 01:06:54.756228894 +0000 UTC m=+6116.321255754
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-488423 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-488423 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (71.981842ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-488423 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-488423 -n default-k8s-diff-port-488423
E1128 01:06:54.891045   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.crt: no such file or directory
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-488423 logs -n 25
E1128 01:06:55.432770   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1128 01:06:55.461974   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.crt: no such file or directory
E1128 01:06:55.467269   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.crt: no such file or directory
E1128 01:06:55.477586   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.crt: no such file or directory
E1128 01:06:55.498473   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.crt: no such file or directory
E1128 01:06:55.538780   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.crt: no such file or directory
E1128 01:06:55.619155   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.crt: no such file or directory
E1128 01:06:55.779504   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.crt: no such file or directory
E1128 01:06:56.100160   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-488423 logs -n 25: (1.345728082s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-167798 sudo systemctl                          | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:05 UTC | 28 Nov 23 01:05 UTC |
	|         | cat kubelet --no-pager                                 |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo journalctl                         | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:05 UTC | 28 Nov 23 01:05 UTC |
	|         | -xeu kubelet --all --full                              |                       |         |         |                     |                     |
	|         | --no-pager                                             |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo cat                                | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:05 UTC | 28 Nov 23 01:05 UTC |
	|         | /etc/kubernetes/kubelet.conf                           |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo cat                                | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:05 UTC | 28 Nov 23 01:05 UTC |
	|         | /var/lib/kubelet/config.yaml                           |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo systemctl                          | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:05 UTC |                     |
	|         | status docker --all --full                             |                       |         |         |                     |                     |
	|         | --no-pager                                             |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo systemctl                          | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:05 UTC | 28 Nov 23 01:06 UTC |
	|         | cat docker --no-pager                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo cat                                | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:06 UTC | 28 Nov 23 01:06 UTC |
	|         | /etc/docker/daemon.json                                |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo docker                             | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:06 UTC |                     |
	|         | system info                                            |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo systemctl                          | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:06 UTC |                     |
	|         | status cri-docker --all --full                         |                       |         |         |                     |                     |
	|         | --no-pager                                             |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo systemctl                          | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:06 UTC | 28 Nov 23 01:06 UTC |
	|         | cat cri-docker --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo cat                                | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:06 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo cat                                | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:06 UTC | 28 Nov 23 01:06 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo                                    | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:06 UTC | 28 Nov 23 01:06 UTC |
	|         | cri-dockerd --version                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo systemctl                          | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:06 UTC |                     |
	|         | status containerd --all --full                         |                       |         |         |                     |                     |
	|         | --no-pager                                             |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo systemctl                          | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:06 UTC | 28 Nov 23 01:06 UTC |
	|         | cat containerd --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo cat                                | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:06 UTC | 28 Nov 23 01:06 UTC |
	|         | /lib/systemd/system/containerd.service                 |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo cat                                | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:06 UTC | 28 Nov 23 01:06 UTC |
	|         | /etc/containerd/config.toml                            |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo containerd                         | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:06 UTC | 28 Nov 23 01:06 UTC |
	|         | config dump                                            |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo systemctl                          | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:06 UTC | 28 Nov 23 01:06 UTC |
	|         | status crio --all --full                               |                       |         |         |                     |                     |
	|         | --no-pager                                             |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo systemctl                          | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:06 UTC | 28 Nov 23 01:06 UTC |
	|         | cat crio --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo find                               | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:06 UTC | 28 Nov 23 01:06 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                       |         |         |                     |                     |
	| ssh     | -p auto-167798 sudo crio                               | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:06 UTC | 28 Nov 23 01:06 UTC |
	|         | config                                                 |                       |         |         |                     |                     |
	| delete  | -p auto-167798                                         | auto-167798           | jenkins | v1.32.0 | 28 Nov 23 01:06 UTC | 28 Nov 23 01:06 UTC |
	| start   | -p custom-flannel-167798                               | custom-flannel-167798 | jenkins | v1.32.0 | 28 Nov 23 01:06 UTC |                     |
	|         | --memory=3072 --alsologtostderr                        |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                         |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                       |                       |         |         |                     |                     |
	|         | --driver=kvm2                                          |                       |         |         |                     |                     |
	|         | --container-runtime=crio                               |                       |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-517109                  | newest-cni-517109     | jenkins | v1.32.0 | 28 Nov 23 01:06 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                       |         |         |                     |                     |
	|---------|--------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 01:06:05
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 01:06:05.332057   53588 out.go:296] Setting OutFile to fd 1 ...
	I1128 01:06:05.332197   53588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 01:06:05.332207   53588 out.go:309] Setting ErrFile to fd 2...
	I1128 01:06:05.332211   53588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 01:06:05.332430   53588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1128 01:06:05.333240   53588 out.go:303] Setting JSON to false
	I1128 01:06:05.334346   53588 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6512,"bootTime":1701127053,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 01:06:05.334408   53588 start.go:138] virtualization: kvm guest
	I1128 01:06:05.336810   53588 out.go:177] * [custom-flannel-167798] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 01:06:05.338412   53588 out.go:177]   - MINIKUBE_LOCATION=17206
	I1128 01:06:05.340086   53588 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 01:06:05.338417   53588 notify.go:220] Checking for updates...
	I1128 01:06:05.343103   53588 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 01:06:05.344602   53588 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1128 01:06:05.346117   53588 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 01:06:05.348131   53588 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 01:06:05.350224   53588 config.go:182] Loaded profile config "calico-167798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 01:06:05.350332   53588 config.go:182] Loaded profile config "default-k8s-diff-port-488423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 01:06:05.350430   53588 config.go:182] Loaded profile config "newest-cni-517109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 01:06:05.350516   53588 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 01:06:05.388228   53588 out.go:177] * Using the kvm2 driver based on user configuration
	I1128 01:06:05.389736   53588 start.go:298] selected driver: kvm2
	I1128 01:06:05.389753   53588 start.go:902] validating driver "kvm2" against <nil>
	I1128 01:06:05.389763   53588 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 01:06:05.390443   53588 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 01:06:05.390516   53588 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17206-4749/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 01:06:05.406176   53588 install.go:137] /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I1128 01:06:05.406232   53588 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1128 01:06:05.406416   53588 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 01:06:05.406477   53588 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1128 01:06:05.406488   53588 start_flags.go:318] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1128 01:06:05.406496   53588 start_flags.go:323] config:
	{Name:custom-flannel-167798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-167798 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 01:06:05.406617   53588 iso.go:125] acquiring lock: {Name:mkcbf4fbddcb89ef7fa17df683cb708781ecb7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 01:06:05.409135   53588 out.go:177] * Starting control plane node custom-flannel-167798 in cluster custom-flannel-167798
	I1128 01:06:05.411166   53588 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 01:06:05.411209   53588 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1128 01:06:05.411222   53588 cache.go:56] Caching tarball of preloaded images
	I1128 01:06:05.411302   53588 preload.go:174] Found /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 01:06:05.411318   53588 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 01:06:05.411459   53588 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/config.json ...
	I1128 01:06:05.411487   53588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/config.json: {Name:mkeb79ccd36f9d737c1f7f5beed6964e410d6444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:06:05.411639   53588 start.go:365] acquiring machines lock for custom-flannel-167798: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 01:06:05.411679   53588 start.go:369] acquired machines lock for "custom-flannel-167798" in 17.103µs
	I1128 01:06:05.411702   53588 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-167798 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-167798 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 01:06:05.411799   53588 start.go:125] createHost starting for "" (driver="kvm2")
	I1128 01:06:05.413635   53588 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1128 01:06:05.413784   53588 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 01:06:05.413831   53588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 01:06:05.429657   53588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36345
	I1128 01:06:05.430114   53588 main.go:141] libmachine: () Calling .GetVersion
	I1128 01:06:05.430672   53588 main.go:141] libmachine: Using API Version  1
	I1128 01:06:05.430697   53588 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 01:06:05.431023   53588 main.go:141] libmachine: () Calling .GetMachineName
	I1128 01:06:05.431238   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetMachineName
	I1128 01:06:05.431411   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .DriverName
	I1128 01:06:05.431569   53588 start.go:159] libmachine.API.Create for "custom-flannel-167798" (driver="kvm2")
	I1128 01:06:05.431598   53588 client.go:168] LocalClient.Create starting
	I1128 01:06:05.431643   53588 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem
	I1128 01:06:05.431703   53588 main.go:141] libmachine: Decoding PEM data...
	I1128 01:06:05.431729   53588 main.go:141] libmachine: Parsing certificate...
	I1128 01:06:05.431795   53588 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem
	I1128 01:06:05.431828   53588 main.go:141] libmachine: Decoding PEM data...
	I1128 01:06:05.431850   53588 main.go:141] libmachine: Parsing certificate...
	I1128 01:06:05.431882   53588 main.go:141] libmachine: Running pre-create checks...
	I1128 01:06:05.431896   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .PreCreateCheck
	I1128 01:06:05.432668   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetConfigRaw
	I1128 01:06:05.434368   53588 main.go:141] libmachine: Creating machine...
	I1128 01:06:05.434389   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .Create
	I1128 01:06:05.434510   53588 main.go:141] libmachine: (custom-flannel-167798) Creating KVM machine...
	I1128 01:06:05.435895   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | found existing default KVM network
	I1128 01:06:05.437166   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:05.437002   53610 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b6:d8:c6} reservation:<nil>}
	I1128 01:06:05.438127   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:05.438031   53610 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:96:c0:71} reservation:<nil>}
	I1128 01:06:05.439241   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:05.439158   53610 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a9170}
	I1128 01:06:05.444393   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | trying to create private KVM network mk-custom-flannel-167798 192.168.61.0/24...
	I1128 01:06:05.521130   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | private KVM network mk-custom-flannel-167798 192.168.61.0/24 created
	I1128 01:06:05.521172   53588 main.go:141] libmachine: (custom-flannel-167798) Setting up store path in /home/jenkins/minikube-integration/17206-4749/.minikube/machines/custom-flannel-167798 ...
	I1128 01:06:05.521187   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:05.521119   53610 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17206-4749/.minikube
	I1128 01:06:05.521225   53588 main.go:141] libmachine: (custom-flannel-167798) Building disk image from file:///home/jenkins/minikube-integration/17206-4749/.minikube/cache/iso/amd64/minikube-v1.32.1-1701107474-17206-amd64.iso
	I1128 01:06:05.521299   53588 main.go:141] libmachine: (custom-flannel-167798) Downloading /home/jenkins/minikube-integration/17206-4749/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17206-4749/.minikube/cache/iso/amd64/minikube-v1.32.1-1701107474-17206-amd64.iso...
	I1128 01:06:05.748265   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:05.748060   53610 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/custom-flannel-167798/id_rsa...
	I1128 01:06:06.022040   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:06.021900   53610 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/custom-flannel-167798/custom-flannel-167798.rawdisk...
	I1128 01:06:06.022083   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | Writing magic tar header
	I1128 01:06:06.022106   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | Writing SSH key tar header
	I1128 01:06:06.022120   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:06.022088   53610 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17206-4749/.minikube/machines/custom-flannel-167798 ...
	I1128 01:06:06.022273   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/custom-flannel-167798
	I1128 01:06:06.022306   53588 main.go:141] libmachine: (custom-flannel-167798) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube/machines/custom-flannel-167798 (perms=drwx------)
	I1128 01:06:06.022321   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube/machines
	I1128 01:06:06.022340   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749/.minikube
	I1128 01:06:06.022355   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17206-4749
	I1128 01:06:06.022372   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1128 01:06:06.022389   53588 main.go:141] libmachine: (custom-flannel-167798) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube/machines (perms=drwxr-xr-x)
	I1128 01:06:06.022404   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | Checking permissions on dir: /home/jenkins
	I1128 01:06:06.022419   53588 main.go:141] libmachine: (custom-flannel-167798) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749/.minikube (perms=drwxr-xr-x)
	I1128 01:06:06.022432   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | Checking permissions on dir: /home
	I1128 01:06:06.022447   53588 main.go:141] libmachine: (custom-flannel-167798) Setting executable bit set on /home/jenkins/minikube-integration/17206-4749 (perms=drwxrwxr-x)
	I1128 01:06:06.022465   53588 main.go:141] libmachine: (custom-flannel-167798) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1128 01:06:06.022480   53588 main.go:141] libmachine: (custom-flannel-167798) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1128 01:06:06.022497   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | Skipping /home - not owner
	I1128 01:06:06.022507   53588 main.go:141] libmachine: (custom-flannel-167798) Creating domain...
	I1128 01:06:06.023934   53588 main.go:141] libmachine: (custom-flannel-167798) define libvirt domain using xml: 
	I1128 01:06:06.023955   53588 main.go:141] libmachine: (custom-flannel-167798) <domain type='kvm'>
	I1128 01:06:06.023991   53588 main.go:141] libmachine: (custom-flannel-167798)   <name>custom-flannel-167798</name>
	I1128 01:06:06.024016   53588 main.go:141] libmachine: (custom-flannel-167798)   <memory unit='MiB'>3072</memory>
	I1128 01:06:06.024031   53588 main.go:141] libmachine: (custom-flannel-167798)   <vcpu>2</vcpu>
	I1128 01:06:06.024042   53588 main.go:141] libmachine: (custom-flannel-167798)   <features>
	I1128 01:06:06.024051   53588 main.go:141] libmachine: (custom-flannel-167798)     <acpi/>
	I1128 01:06:06.024059   53588 main.go:141] libmachine: (custom-flannel-167798)     <apic/>
	I1128 01:06:06.024073   53588 main.go:141] libmachine: (custom-flannel-167798)     <pae/>
	I1128 01:06:06.024088   53588 main.go:141] libmachine: (custom-flannel-167798)     
	I1128 01:06:06.024101   53588 main.go:141] libmachine: (custom-flannel-167798)   </features>
	I1128 01:06:06.024110   53588 main.go:141] libmachine: (custom-flannel-167798)   <cpu mode='host-passthrough'>
	I1128 01:06:06.024124   53588 main.go:141] libmachine: (custom-flannel-167798)   
	I1128 01:06:06.024136   53588 main.go:141] libmachine: (custom-flannel-167798)   </cpu>
	I1128 01:06:06.024149   53588 main.go:141] libmachine: (custom-flannel-167798)   <os>
	I1128 01:06:06.024164   53588 main.go:141] libmachine: (custom-flannel-167798)     <type>hvm</type>
	I1128 01:06:06.024176   53588 main.go:141] libmachine: (custom-flannel-167798)     <boot dev='cdrom'/>
	I1128 01:06:06.024206   53588 main.go:141] libmachine: (custom-flannel-167798)     <boot dev='hd'/>
	I1128 01:06:06.024224   53588 main.go:141] libmachine: (custom-flannel-167798)     <bootmenu enable='no'/>
	I1128 01:06:06.024235   53588 main.go:141] libmachine: (custom-flannel-167798)   </os>
	I1128 01:06:06.024261   53588 main.go:141] libmachine: (custom-flannel-167798)   <devices>
	I1128 01:06:06.024360   53588 main.go:141] libmachine: (custom-flannel-167798)     <disk type='file' device='cdrom'>
	I1128 01:06:06.024401   53588 main.go:141] libmachine: (custom-flannel-167798)       <source file='/home/jenkins/minikube-integration/17206-4749/.minikube/machines/custom-flannel-167798/boot2docker.iso'/>
	I1128 01:06:06.024425   53588 main.go:141] libmachine: (custom-flannel-167798)       <target dev='hdc' bus='scsi'/>
	I1128 01:06:06.024441   53588 main.go:141] libmachine: (custom-flannel-167798)       <readonly/>
	I1128 01:06:06.024455   53588 main.go:141] libmachine: (custom-flannel-167798)     </disk>
	I1128 01:06:06.024472   53588 main.go:141] libmachine: (custom-flannel-167798)     <disk type='file' device='disk'>
	I1128 01:06:06.024486   53588 main.go:141] libmachine: (custom-flannel-167798)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1128 01:06:06.024511   53588 main.go:141] libmachine: (custom-flannel-167798)       <source file='/home/jenkins/minikube-integration/17206-4749/.minikube/machines/custom-flannel-167798/custom-flannel-167798.rawdisk'/>
	I1128 01:06:06.024541   53588 main.go:141] libmachine: (custom-flannel-167798)       <target dev='hda' bus='virtio'/>
	I1128 01:06:06.024556   53588 main.go:141] libmachine: (custom-flannel-167798)     </disk>
	I1128 01:06:06.024565   53588 main.go:141] libmachine: (custom-flannel-167798)     <interface type='network'>
	I1128 01:06:06.024592   53588 main.go:141] libmachine: (custom-flannel-167798)       <source network='mk-custom-flannel-167798'/>
	I1128 01:06:06.024612   53588 main.go:141] libmachine: (custom-flannel-167798)       <model type='virtio'/>
	I1128 01:06:06.024637   53588 main.go:141] libmachine: (custom-flannel-167798)     </interface>
	I1128 01:06:06.024653   53588 main.go:141] libmachine: (custom-flannel-167798)     <interface type='network'>
	I1128 01:06:06.024679   53588 main.go:141] libmachine: (custom-flannel-167798)       <source network='default'/>
	I1128 01:06:06.024699   53588 main.go:141] libmachine: (custom-flannel-167798)       <model type='virtio'/>
	I1128 01:06:06.024715   53588 main.go:141] libmachine: (custom-flannel-167798)     </interface>
	I1128 01:06:06.024729   53588 main.go:141] libmachine: (custom-flannel-167798)     <serial type='pty'>
	I1128 01:06:06.024745   53588 main.go:141] libmachine: (custom-flannel-167798)       <target port='0'/>
	I1128 01:06:06.024780   53588 main.go:141] libmachine: (custom-flannel-167798)     </serial>
	I1128 01:06:06.024800   53588 main.go:141] libmachine: (custom-flannel-167798)     <console type='pty'>
	I1128 01:06:06.024817   53588 main.go:141] libmachine: (custom-flannel-167798)       <target type='serial' port='0'/>
	I1128 01:06:06.024830   53588 main.go:141] libmachine: (custom-flannel-167798)     </console>
	I1128 01:06:06.024843   53588 main.go:141] libmachine: (custom-flannel-167798)     <rng model='virtio'>
	I1128 01:06:06.024855   53588 main.go:141] libmachine: (custom-flannel-167798)       <backend model='random'>/dev/random</backend>
	I1128 01:06:06.024867   53588 main.go:141] libmachine: (custom-flannel-167798)     </rng>
	I1128 01:06:06.024883   53588 main.go:141] libmachine: (custom-flannel-167798)     
	I1128 01:06:06.024896   53588 main.go:141] libmachine: (custom-flannel-167798)     
	I1128 01:06:06.024912   53588 main.go:141] libmachine: (custom-flannel-167798)   </devices>
	I1128 01:06:06.024924   53588 main.go:141] libmachine: (custom-flannel-167798) </domain>
	I1128 01:06:06.024939   53588 main.go:141] libmachine: (custom-flannel-167798) 
	I1128 01:06:06.028966   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:14:70:fb in network default
	I1128 01:06:06.029545   53588 main.go:141] libmachine: (custom-flannel-167798) Ensuring networks are active...
	I1128 01:06:06.029577   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:06.030321   53588 main.go:141] libmachine: (custom-flannel-167798) Ensuring network default is active
	I1128 01:06:06.030636   53588 main.go:141] libmachine: (custom-flannel-167798) Ensuring network mk-custom-flannel-167798 is active
	I1128 01:06:06.031089   53588 main.go:141] libmachine: (custom-flannel-167798) Getting domain xml...
	I1128 01:06:06.031743   53588 main.go:141] libmachine: (custom-flannel-167798) Creating domain...
	I1128 01:06:07.359393   53588 main.go:141] libmachine: (custom-flannel-167798) Waiting to get IP...
	I1128 01:06:07.360308   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:07.360860   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | unable to find current IP address of domain custom-flannel-167798 in network mk-custom-flannel-167798
	I1128 01:06:07.360909   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:07.360836   53610 retry.go:31] will retry after 193.833916ms: waiting for machine to come up
	I1128 01:06:07.556277   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:07.556740   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | unable to find current IP address of domain custom-flannel-167798 in network mk-custom-flannel-167798
	I1128 01:06:07.556782   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:07.556687   53610 retry.go:31] will retry after 263.202592ms: waiting for machine to come up
	I1128 01:06:07.822297   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:07.822840   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | unable to find current IP address of domain custom-flannel-167798 in network mk-custom-flannel-167798
	I1128 01:06:07.822873   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:07.822792   53610 retry.go:31] will retry after 487.353714ms: waiting for machine to come up
	I1128 01:06:08.311578   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:08.312086   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | unable to find current IP address of domain custom-flannel-167798 in network mk-custom-flannel-167798
	I1128 01:06:08.312118   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:08.312031   53610 retry.go:31] will retry after 431.018789ms: waiting for machine to come up
	I1128 01:06:08.744643   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:08.745098   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | unable to find current IP address of domain custom-flannel-167798 in network mk-custom-flannel-167798
	I1128 01:06:08.745119   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:08.745058   53610 retry.go:31] will retry after 525.151265ms: waiting for machine to come up
	I1128 01:06:09.271490   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:09.271953   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | unable to find current IP address of domain custom-flannel-167798 in network mk-custom-flannel-167798
	I1128 01:06:09.271982   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:09.271897   53610 retry.go:31] will retry after 573.628287ms: waiting for machine to come up
	I1128 01:06:09.847220   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:09.847751   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | unable to find current IP address of domain custom-flannel-167798 in network mk-custom-flannel-167798
	I1128 01:06:09.847779   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:09.847702   53610 retry.go:31] will retry after 758.942453ms: waiting for machine to come up
	I1128 01:06:10.348953   51982 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503249 seconds
	I1128 01:06:10.349150   51982 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 01:06:10.367214   51982 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 01:06:10.903506   51982 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 01:06:10.903749   51982 kubeadm.go:322] [mark-control-plane] Marking the node calico-167798 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 01:06:11.419680   51982 kubeadm.go:322] [bootstrap-token] Using token: 0ch9hb.q6i0nrfhjwtnjh4j
	I1128 01:06:11.421390   51982 out.go:204]   - Configuring RBAC rules ...
	I1128 01:06:11.421539   51982 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 01:06:11.426813   51982 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 01:06:11.435844   51982 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 01:06:11.440518   51982 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 01:06:11.444584   51982 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 01:06:11.452159   51982 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 01:06:11.467048   51982 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 01:06:11.741929   51982 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 01:06:11.860450   51982 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 01:06:11.860499   51982 kubeadm.go:322] 
	I1128 01:06:11.860598   51982 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 01:06:11.860614   51982 kubeadm.go:322] 
	I1128 01:06:11.860710   51982 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 01:06:11.860718   51982 kubeadm.go:322] 
	I1128 01:06:11.860782   51982 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 01:06:11.860897   51982 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 01:06:11.860958   51982 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 01:06:11.860968   51982 kubeadm.go:322] 
	I1128 01:06:11.861051   51982 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 01:06:11.861065   51982 kubeadm.go:322] 
	I1128 01:06:11.861147   51982 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 01:06:11.861167   51982 kubeadm.go:322] 
	I1128 01:06:11.861245   51982 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 01:06:11.861361   51982 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 01:06:11.861471   51982 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 01:06:11.861484   51982 kubeadm.go:322] 
	I1128 01:06:11.861611   51982 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 01:06:11.861714   51982 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 01:06:11.861727   51982 kubeadm.go:322] 
	I1128 01:06:11.861825   51982 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0ch9hb.q6i0nrfhjwtnjh4j \
	I1128 01:06:11.861981   51982 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1128 01:06:11.862011   51982 kubeadm.go:322] 	--control-plane 
	I1128 01:06:11.862021   51982 kubeadm.go:322] 
	I1128 01:06:11.862117   51982 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 01:06:11.862128   51982 kubeadm.go:322] 
	I1128 01:06:11.862224   51982 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0ch9hb.q6i0nrfhjwtnjh4j \
	I1128 01:06:11.862401   51982 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1128 01:06:11.862549   51982 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 01:06:11.862565   51982 cni.go:84] Creating CNI manager for "calico"
	I1128 01:06:11.864569   51982 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I1128 01:06:11.866443   51982 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1128 01:06:11.866462   51982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (244810 bytes)
	I1128 01:06:11.923633   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1128 01:06:13.892433   51982 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.968758397s)
	I1128 01:06:13.892481   51982 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 01:06:13.892586   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:13.892589   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=calico-167798 minikube.k8s.io/updated_at=2023_11_28T01_06_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:13.915450   51982 ops.go:34] apiserver oom_adj: -16
	I1128 01:06:14.037355   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:14.168119   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:10.607986   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:10.608405   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | unable to find current IP address of domain custom-flannel-167798 in network mk-custom-flannel-167798
	I1128 01:06:10.608433   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:10.608352   53610 retry.go:31] will retry after 1.269720426s: waiting for machine to come up
	I1128 01:06:11.879717   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:11.880255   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | unable to find current IP address of domain custom-flannel-167798 in network mk-custom-flannel-167798
	I1128 01:06:11.880301   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:11.880189   53610 retry.go:31] will retry after 1.339259862s: waiting for machine to come up
	I1128 01:06:13.221371   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:13.221850   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | unable to find current IP address of domain custom-flannel-167798 in network mk-custom-flannel-167798
	I1128 01:06:13.221884   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:13.221809   53610 retry.go:31] will retry after 2.084895337s: waiting for machine to come up
	I1128 01:06:15.308485   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:15.308978   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | unable to find current IP address of domain custom-flannel-167798 in network mk-custom-flannel-167798
	I1128 01:06:15.309006   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:15.308930   53610 retry.go:31] will retry after 2.833457992s: waiting for machine to come up
	I1128 01:06:14.764224   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:15.264837   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:15.764322   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:16.264869   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:16.764154   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:17.264106   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:17.764564   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:18.264282   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:18.764443   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:19.264835   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:18.145805   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:18.146323   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | unable to find current IP address of domain custom-flannel-167798 in network mk-custom-flannel-167798
	I1128 01:06:18.146346   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:18.146281   53610 retry.go:31] will retry after 3.404760777s: waiting for machine to come up
	I1128 01:06:19.763940   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:20.264723   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:20.764795   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:21.263965   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:21.764899   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:22.263962   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:22.764706   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:23.264509   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:23.764342   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:24.264183   51982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 01:06:24.375795   51982 kubeadm.go:1081] duration metric: took 10.483276005s to wait for elevateKubeSystemPrivileges.
	I1128 01:06:24.375838   51982 kubeadm.go:406] StartCluster complete in 25.725982971s
	I1128 01:06:24.375860   51982 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:06:24.375940   51982 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 01:06:24.377327   51982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:06:24.377531   51982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 01:06:24.377649   51982 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 01:06:24.377721   51982 addons.go:69] Setting storage-provisioner=true in profile "calico-167798"
	I1128 01:06:24.377736   51982 addons.go:69] Setting default-storageclass=true in profile "calico-167798"
	I1128 01:06:24.377761   51982 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-167798"
	I1128 01:06:24.377739   51982 config.go:182] Loaded profile config "calico-167798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 01:06:24.377743   51982 addons.go:231] Setting addon storage-provisioner=true in "calico-167798"
	I1128 01:06:24.377939   51982 host.go:66] Checking if "calico-167798" exists ...
	I1128 01:06:24.378274   51982 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 01:06:24.378307   51982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 01:06:24.378316   51982 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 01:06:24.378345   51982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 01:06:24.394216   51982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45887
	I1128 01:06:24.394671   51982 main.go:141] libmachine: () Calling .GetVersion
	I1128 01:06:24.395151   51982 main.go:141] libmachine: Using API Version  1
	I1128 01:06:24.395176   51982 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 01:06:24.395551   51982 main.go:141] libmachine: () Calling .GetMachineName
	I1128 01:06:24.395989   51982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34749
	I1128 01:06:24.396162   51982 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 01:06:24.396199   51982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 01:06:24.396427   51982 main.go:141] libmachine: () Calling .GetVersion
	I1128 01:06:24.396876   51982 main.go:141] libmachine: Using API Version  1
	I1128 01:06:24.396899   51982 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 01:06:24.397311   51982 main.go:141] libmachine: () Calling .GetMachineName
	I1128 01:06:24.397543   51982 main.go:141] libmachine: (calico-167798) Calling .GetState
	I1128 01:06:24.400625   51982 addons.go:231] Setting addon default-storageclass=true in "calico-167798"
	I1128 01:06:24.400663   51982 host.go:66] Checking if "calico-167798" exists ...
	I1128 01:06:24.401132   51982 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 01:06:24.401177   51982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 01:06:24.407090   51982 kapi.go:248] "coredns" deployment in "kube-system" namespace and "calico-167798" context rescaled to 1 replicas
	I1128 01:06:24.407122   51982 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.50.133 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 01:06:24.408941   51982 out.go:177] * Verifying Kubernetes components...
	I1128 01:06:24.410713   51982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 01:06:24.412560   51982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34265
	I1128 01:06:24.413110   51982 main.go:141] libmachine: () Calling .GetVersion
	I1128 01:06:24.413554   51982 main.go:141] libmachine: Using API Version  1
	I1128 01:06:24.413575   51982 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 01:06:24.413978   51982 main.go:141] libmachine: () Calling .GetMachineName
	I1128 01:06:24.414228   51982 main.go:141] libmachine: (calico-167798) Calling .GetState
	I1128 01:06:24.416143   51982 main.go:141] libmachine: (calico-167798) Calling .DriverName
	I1128 01:06:24.418243   51982 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 01:06:24.416688   51982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43037
	I1128 01:06:24.420175   51982 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 01:06:24.420189   51982 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 01:06:24.420208   51982 main.go:141] libmachine: (calico-167798) Calling .GetSSHHostname
	I1128 01:06:24.420729   51982 main.go:141] libmachine: () Calling .GetVersion
	I1128 01:06:24.421455   51982 main.go:141] libmachine: Using API Version  1
	I1128 01:06:24.421495   51982 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 01:06:24.422113   51982 main.go:141] libmachine: () Calling .GetMachineName
	I1128 01:06:24.422821   51982 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 01:06:24.422893   51982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 01:06:24.424386   51982 main.go:141] libmachine: (calico-167798) DBG | domain calico-167798 has defined MAC address 52:54:00:b0:19:64 in network mk-calico-167798
	I1128 01:06:24.424414   51982 main.go:141] libmachine: (calico-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:19:64", ip: ""} in network mk-calico-167798: {Iface:virbr2 ExpiryTime:2023-11-28 02:05:41 +0000 UTC Type:0 Mac:52:54:00:b0:19:64 Iaid: IPaddr:192.168.50.133 Prefix:24 Hostname:calico-167798 Clientid:01:52:54:00:b0:19:64}
	I1128 01:06:24.424449   51982 main.go:141] libmachine: (calico-167798) DBG | domain calico-167798 has defined IP address 192.168.50.133 and MAC address 52:54:00:b0:19:64 in network mk-calico-167798
	I1128 01:06:24.424506   51982 main.go:141] libmachine: (calico-167798) Calling .GetSSHPort
	I1128 01:06:24.424701   51982 main.go:141] libmachine: (calico-167798) Calling .GetSSHKeyPath
	I1128 01:06:24.424892   51982 main.go:141] libmachine: (calico-167798) Calling .GetSSHUsername
	I1128 01:06:24.425068   51982 sshutil.go:53] new ssh client: &{IP:192.168.50.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/calico-167798/id_rsa Username:docker}
	I1128 01:06:24.438964   51982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45145
	I1128 01:06:24.439617   51982 main.go:141] libmachine: () Calling .GetVersion
	I1128 01:06:24.440082   51982 main.go:141] libmachine: Using API Version  1
	I1128 01:06:24.440110   51982 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 01:06:24.440477   51982 main.go:141] libmachine: () Calling .GetMachineName
	I1128 01:06:24.440677   51982 main.go:141] libmachine: (calico-167798) Calling .GetState
	I1128 01:06:24.442534   51982 main.go:141] libmachine: (calico-167798) Calling .DriverName
	I1128 01:06:24.442806   51982 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 01:06:24.442838   51982 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 01:06:24.442877   51982 main.go:141] libmachine: (calico-167798) Calling .GetSSHHostname
	I1128 01:06:24.445694   51982 main.go:141] libmachine: (calico-167798) DBG | domain calico-167798 has defined MAC address 52:54:00:b0:19:64 in network mk-calico-167798
	I1128 01:06:24.446142   51982 main.go:141] libmachine: (calico-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:19:64", ip: ""} in network mk-calico-167798: {Iface:virbr2 ExpiryTime:2023-11-28 02:05:41 +0000 UTC Type:0 Mac:52:54:00:b0:19:64 Iaid: IPaddr:192.168.50.133 Prefix:24 Hostname:calico-167798 Clientid:01:52:54:00:b0:19:64}
	I1128 01:06:24.446170   51982 main.go:141] libmachine: (calico-167798) DBG | domain calico-167798 has defined IP address 192.168.50.133 and MAC address 52:54:00:b0:19:64 in network mk-calico-167798
	I1128 01:06:24.446327   51982 main.go:141] libmachine: (calico-167798) Calling .GetSSHPort
	I1128 01:06:24.446518   51982 main.go:141] libmachine: (calico-167798) Calling .GetSSHKeyPath
	I1128 01:06:24.446703   51982 main.go:141] libmachine: (calico-167798) Calling .GetSSHUsername
	I1128 01:06:24.446851   51982 sshutil.go:53] new ssh client: &{IP:192.168.50.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/calico-167798/id_rsa Username:docker}
	I1128 01:06:21.552397   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:21.552821   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | unable to find current IP address of domain custom-flannel-167798 in network mk-custom-flannel-167798
	I1128 01:06:21.552856   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:21.552745   53610 retry.go:31] will retry after 4.007286124s: waiting for machine to come up
	I1128 01:06:24.600938   51982 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 01:06:24.640815   51982 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 01:06:24.794899   51982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 01:06:24.795864   51982 node_ready.go:35] waiting up to 15m0s for node "calico-167798" to be "Ready" ...
	I1128 01:06:25.775027   51982 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.174040643s)
	I1128 01:06:25.775089   51982 main.go:141] libmachine: Making call to close driver server
	I1128 01:06:25.775103   51982 main.go:141] libmachine: (calico-167798) Calling .Close
	I1128 01:06:25.775455   51982 main.go:141] libmachine: Successfully made call to close driver server
	I1128 01:06:25.775555   51982 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 01:06:25.775569   51982 main.go:141] libmachine: Making call to close driver server
	I1128 01:06:25.775587   51982 main.go:141] libmachine: (calico-167798) Calling .Close
	I1128 01:06:25.775532   51982 main.go:141] libmachine: (calico-167798) DBG | Closing plugin on server side
	I1128 01:06:25.775860   51982 main.go:141] libmachine: (calico-167798) DBG | Closing plugin on server side
	I1128 01:06:25.775890   51982 main.go:141] libmachine: Successfully made call to close driver server
	I1128 01:06:25.775905   51982 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 01:06:25.783022   51982 main.go:141] libmachine: Making call to close driver server
	I1128 01:06:25.783045   51982 main.go:141] libmachine: (calico-167798) Calling .Close
	I1128 01:06:25.783291   51982 main.go:141] libmachine: Successfully made call to close driver server
	I1128 01:06:25.783315   51982 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 01:06:25.783341   51982 main.go:141] libmachine: (calico-167798) DBG | Closing plugin on server side
	I1128 01:06:26.118492   51982 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.477631072s)
	I1128 01:06:26.118547   51982 main.go:141] libmachine: Making call to close driver server
	I1128 01:06:26.118569   51982 main.go:141] libmachine: (calico-167798) Calling .Close
	I1128 01:06:26.118492   51982 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.323550621s)
	I1128 01:06:26.118658   51982 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1128 01:06:26.118856   51982 main.go:141] libmachine: (calico-167798) DBG | Closing plugin on server side
	I1128 01:06:26.118905   51982 main.go:141] libmachine: Successfully made call to close driver server
	I1128 01:06:26.118918   51982 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 01:06:26.118929   51982 main.go:141] libmachine: Making call to close driver server
	I1128 01:06:26.118938   51982 main.go:141] libmachine: (calico-167798) Calling .Close
	I1128 01:06:26.119150   51982 main.go:141] libmachine: Successfully made call to close driver server
	I1128 01:06:26.119163   51982 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 01:06:26.119189   51982 main.go:141] libmachine: (calico-167798) DBG | Closing plugin on server side
	I1128 01:06:26.121062   51982 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1128 01:06:26.122900   51982 addons.go:502] enable addons completed in 1.745250208s: enabled=[default-storageclass storage-provisioner]
	I1128 01:06:27.043077   51982 node_ready.go:58] node "calico-167798" has status "Ready":"False"
	I1128 01:06:29.043682   51982 node_ready.go:58] node "calico-167798" has status "Ready":"False"
	I1128 01:06:25.561437   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:25.561881   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | unable to find current IP address of domain custom-flannel-167798 in network mk-custom-flannel-167798
	I1128 01:06:25.561912   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | I1128 01:06:25.561825   53610 retry.go:31] will retry after 5.183738008s: waiting for machine to come up
	I1128 01:06:30.747035   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:30.747514   53588 main.go:141] libmachine: (custom-flannel-167798) Found IP for machine: 192.168.61.204
	I1128 01:06:30.747547   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has current primary IP address 192.168.61.204 and MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:30.747558   53588 main.go:141] libmachine: (custom-flannel-167798) Reserving static IP address...
	I1128 01:06:30.747864   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | unable to find host DHCP lease matching {name: "custom-flannel-167798", mac: "52:54:00:f0:f3:c0", ip: "192.168.61.204"} in network mk-custom-flannel-167798
	I1128 01:06:30.826902   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | Getting to WaitForSSH function...
	I1128 01:06:30.826938   53588 main.go:141] libmachine: (custom-flannel-167798) Reserved static IP address: 192.168.61.204
	I1128 01:06:30.826989   53588 main.go:141] libmachine: (custom-flannel-167798) Waiting for SSH to be available...
	I1128 01:06:30.830239   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:30.830679   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f3:c0", ip: ""} in network mk-custom-flannel-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:06:22 +0000 UTC Type:0 Mac:52:54:00:f0:f3:c0 Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f0:f3:c0}
	I1128 01:06:30.830707   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined IP address 192.168.61.204 and MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:30.830860   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | Using SSH client type: external
	I1128 01:06:30.830888   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/custom-flannel-167798/id_rsa (-rw-------)
	I1128 01:06:30.830930   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.204 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/custom-flannel-167798/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 01:06:30.830944   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | About to run SSH command:
	I1128 01:06:30.830958   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | exit 0
	I1128 01:06:30.928716   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | SSH cmd err, output: <nil>: 
	I1128 01:06:30.929045   53588 main.go:141] libmachine: (custom-flannel-167798) KVM machine creation complete!
	I1128 01:06:30.929313   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetConfigRaw
	I1128 01:06:30.929932   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .DriverName
	I1128 01:06:30.930188   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .DriverName
	I1128 01:06:30.930394   53588 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1128 01:06:30.930418   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetState
	I1128 01:06:30.931743   53588 main.go:141] libmachine: Detecting operating system of created instance...
	I1128 01:06:30.931760   53588 main.go:141] libmachine: Waiting for SSH to be available...
	I1128 01:06:30.931770   53588 main.go:141] libmachine: Getting to WaitForSSH function...
	I1128 01:06:30.931780   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHHostname
	I1128 01:06:30.934674   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:30.935075   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f3:c0", ip: ""} in network mk-custom-flannel-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:06:22 +0000 UTC Type:0 Mac:52:54:00:f0:f3:c0 Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:custom-flannel-167798 Clientid:01:52:54:00:f0:f3:c0}
	I1128 01:06:30.935119   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined IP address 192.168.61.204 and MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:30.935201   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHPort
	I1128 01:06:30.935366   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHKeyPath
	I1128 01:06:30.935513   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHKeyPath
	I1128 01:06:30.935651   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHUsername
	I1128 01:06:30.935850   53588 main.go:141] libmachine: Using SSH client type: native
	I1128 01:06:30.936222   53588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.204 22 <nil> <nil>}
	I1128 01:06:30.936237   53588 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1128 01:06:31.076557   53588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 01:06:31.076586   53588 main.go:141] libmachine: Detecting the provisioner...
	I1128 01:06:31.076597   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHHostname
	I1128 01:06:31.079884   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:31.080303   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f3:c0", ip: ""} in network mk-custom-flannel-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:06:22 +0000 UTC Type:0 Mac:52:54:00:f0:f3:c0 Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:custom-flannel-167798 Clientid:01:52:54:00:f0:f3:c0}
	I1128 01:06:31.080334   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined IP address 192.168.61.204 and MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:31.080505   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHPort
	I1128 01:06:31.080702   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHKeyPath
	I1128 01:06:31.080881   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHKeyPath
	I1128 01:06:31.081062   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHUsername
	I1128 01:06:31.081237   53588 main.go:141] libmachine: Using SSH client type: native
	I1128 01:06:31.081584   53588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.204 22 <nil> <nil>}
	I1128 01:06:31.081600   53588 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1128 01:06:31.218197   53588 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g8be4f20-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1128 01:06:31.218314   53588 main.go:141] libmachine: found compatible host: buildroot
	I1128 01:06:31.218331   53588 main.go:141] libmachine: Provisioning with buildroot...
	I1128 01:06:31.218349   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetMachineName
	I1128 01:06:31.218633   53588 buildroot.go:166] provisioning hostname "custom-flannel-167798"
	I1128 01:06:31.218664   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetMachineName
	I1128 01:06:31.218856   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHHostname
	I1128 01:06:31.221999   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:31.222435   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f3:c0", ip: ""} in network mk-custom-flannel-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:06:22 +0000 UTC Type:0 Mac:52:54:00:f0:f3:c0 Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:custom-flannel-167798 Clientid:01:52:54:00:f0:f3:c0}
	I1128 01:06:31.222473   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined IP address 192.168.61.204 and MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:31.222653   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHPort
	I1128 01:06:31.222853   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHKeyPath
	I1128 01:06:31.223002   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHKeyPath
	I1128 01:06:31.223180   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHUsername
	I1128 01:06:31.223364   53588 main.go:141] libmachine: Using SSH client type: native
	I1128 01:06:31.223677   53588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.204 22 <nil> <nil>}
	I1128 01:06:31.223706   53588 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-167798 && echo "custom-flannel-167798" | sudo tee /etc/hostname
	I1128 01:06:31.373139   53588 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-167798
	
	I1128 01:06:31.373172   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHHostname
	I1128 01:06:31.376198   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:31.376690   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f3:c0", ip: ""} in network mk-custom-flannel-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:06:22 +0000 UTC Type:0 Mac:52:54:00:f0:f3:c0 Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:custom-flannel-167798 Clientid:01:52:54:00:f0:f3:c0}
	I1128 01:06:31.376722   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined IP address 192.168.61.204 and MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:31.377007   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHPort
	I1128 01:06:31.377219   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHKeyPath
	I1128 01:06:31.377371   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHKeyPath
	I1128 01:06:31.377530   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHUsername
	I1128 01:06:31.377677   53588 main.go:141] libmachine: Using SSH client type: native
	I1128 01:06:31.378158   53588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.204 22 <nil> <nil>}
	I1128 01:06:31.378181   53588 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-167798' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-167798/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-167798' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 01:06:31.518753   53588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 01:06:31.518778   53588 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 01:06:31.518821   53588 buildroot.go:174] setting up certificates
	I1128 01:06:31.518838   53588 provision.go:83] configureAuth start
	I1128 01:06:31.518857   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetMachineName
	I1128 01:06:31.519181   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetIP
	I1128 01:06:31.522369   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:31.522733   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f3:c0", ip: ""} in network mk-custom-flannel-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:06:22 +0000 UTC Type:0 Mac:52:54:00:f0:f3:c0 Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:custom-flannel-167798 Clientid:01:52:54:00:f0:f3:c0}
	I1128 01:06:31.522762   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined IP address 192.168.61.204 and MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:31.522926   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHHostname
	I1128 01:06:31.525522   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:31.525931   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f3:c0", ip: ""} in network mk-custom-flannel-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:06:22 +0000 UTC Type:0 Mac:52:54:00:f0:f3:c0 Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:custom-flannel-167798 Clientid:01:52:54:00:f0:f3:c0}
	I1128 01:06:31.525963   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined IP address 192.168.61.204 and MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:31.526117   53588 provision.go:138] copyHostCerts
	I1128 01:06:31.526172   53588 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 01:06:31.526185   53588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 01:06:31.526260   53588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 01:06:31.526393   53588 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 01:06:31.526408   53588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 01:06:31.526442   53588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 01:06:31.526523   53588 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 01:06:31.526533   53588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 01:06:31.526566   53588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 01:06:31.526648   53588 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-167798 san=[192.168.61.204 192.168.61.204 localhost 127.0.0.1 minikube custom-flannel-167798]
	I1128 01:06:31.645470   53588 provision.go:172] copyRemoteCerts
	I1128 01:06:31.645531   53588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 01:06:31.645562   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHHostname
	I1128 01:06:31.648382   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:31.648810   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f3:c0", ip: ""} in network mk-custom-flannel-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:06:22 +0000 UTC Type:0 Mac:52:54:00:f0:f3:c0 Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:custom-flannel-167798 Clientid:01:52:54:00:f0:f3:c0}
	I1128 01:06:31.648842   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined IP address 192.168.61.204 and MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:31.649033   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHPort
	I1128 01:06:31.649249   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHKeyPath
	I1128 01:06:31.649408   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHUsername
	I1128 01:06:31.649557   53588 sshutil.go:53] new ssh client: &{IP:192.168.61.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/custom-flannel-167798/id_rsa Username:docker}
	I1128 01:06:31.749510   53588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 01:06:31.780950   53588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1128 01:06:31.810068   53588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 01:06:31.832844   53588 provision.go:86] duration metric: configureAuth took 313.987536ms
	I1128 01:06:31.832885   53588 buildroot.go:189] setting minikube options for container-runtime
	I1128 01:06:31.833116   53588 config.go:182] Loaded profile config "custom-flannel-167798": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 01:06:31.833204   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHHostname
	I1128 01:06:31.836056   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:31.836477   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f3:c0", ip: ""} in network mk-custom-flannel-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:06:22 +0000 UTC Type:0 Mac:52:54:00:f0:f3:c0 Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:custom-flannel-167798 Clientid:01:52:54:00:f0:f3:c0}
	I1128 01:06:31.836508   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined IP address 192.168.61.204 and MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:31.836657   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHPort
	I1128 01:06:31.836877   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHKeyPath
	I1128 01:06:31.837085   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHKeyPath
	I1128 01:06:31.837237   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHUsername
	I1128 01:06:31.837409   53588 main.go:141] libmachine: Using SSH client type: native
	I1128 01:06:31.837879   53588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.204 22 <nil> <nil>}
	I1128 01:06:31.837906   53588 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 01:06:32.212527   53588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 01:06:32.212561   53588 main.go:141] libmachine: Checking connection to Docker...
	I1128 01:06:32.212574   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetURL
	I1128 01:06:32.214221   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | Using libvirt version 6000000
	I1128 01:06:32.217154   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:32.217643   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f3:c0", ip: ""} in network mk-custom-flannel-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:06:22 +0000 UTC Type:0 Mac:52:54:00:f0:f3:c0 Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:custom-flannel-167798 Clientid:01:52:54:00:f0:f3:c0}
	I1128 01:06:32.217675   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined IP address 192.168.61.204 and MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:32.217996   53588 main.go:141] libmachine: Docker is up and running!
	I1128 01:06:32.218015   53588 main.go:141] libmachine: Reticulating splines...
	I1128 01:06:32.218023   53588 client.go:171] LocalClient.Create took 26.786416937s
	I1128 01:06:32.218053   53588 start.go:167] duration metric: libmachine.API.Create for "custom-flannel-167798" took 26.786484036s
	I1128 01:06:32.218065   53588 start.go:300] post-start starting for "custom-flannel-167798" (driver="kvm2")
	I1128 01:06:32.218076   53588 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 01:06:32.218102   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .DriverName
	I1128 01:06:32.218368   53588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 01:06:32.218403   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHHostname
	I1128 01:06:32.221475   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:32.221896   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f3:c0", ip: ""} in network mk-custom-flannel-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:06:22 +0000 UTC Type:0 Mac:52:54:00:f0:f3:c0 Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:custom-flannel-167798 Clientid:01:52:54:00:f0:f3:c0}
	I1128 01:06:32.221933   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined IP address 192.168.61.204 and MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:32.222063   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHPort
	I1128 01:06:32.222224   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHKeyPath
	I1128 01:06:32.222398   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHUsername
	I1128 01:06:32.222548   53588 sshutil.go:53] new ssh client: &{IP:192.168.61.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/custom-flannel-167798/id_rsa Username:docker}
	I1128 01:06:32.322073   53588 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 01:06:32.326328   53588 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 01:06:32.326357   53588 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 01:06:32.326425   53588 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 01:06:32.326524   53588 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 01:06:32.326639   53588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 01:06:32.335474   53588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 01:06:32.360499   53588 start.go:303] post-start completed in 142.419695ms
	I1128 01:06:32.360564   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetConfigRaw
	I1128 01:06:32.361299   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetIP
	I1128 01:06:32.364585   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:32.364988   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f3:c0", ip: ""} in network mk-custom-flannel-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:06:22 +0000 UTC Type:0 Mac:52:54:00:f0:f3:c0 Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:custom-flannel-167798 Clientid:01:52:54:00:f0:f3:c0}
	I1128 01:06:32.365020   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined IP address 192.168.61.204 and MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:32.365260   53588 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/config.json ...
	I1128 01:06:32.365472   53588 start.go:128] duration metric: createHost completed in 26.953659302s
	I1128 01:06:32.365511   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHHostname
	I1128 01:06:32.367748   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:32.368054   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f3:c0", ip: ""} in network mk-custom-flannel-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:06:22 +0000 UTC Type:0 Mac:52:54:00:f0:f3:c0 Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:custom-flannel-167798 Clientid:01:52:54:00:f0:f3:c0}
	I1128 01:06:32.368084   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined IP address 192.168.61.204 and MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:32.368205   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHPort
	I1128 01:06:32.368413   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHKeyPath
	I1128 01:06:32.368602   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHKeyPath
	I1128 01:06:32.368770   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHUsername
	I1128 01:06:32.368960   53588 main.go:141] libmachine: Using SSH client type: native
	I1128 01:06:32.369387   53588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.204 22 <nil> <nil>}
	I1128 01:06:32.369403   53588 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 01:06:32.506291   53588 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701133592.490385274
	
	I1128 01:06:32.506328   53588 fix.go:206] guest clock: 1701133592.490385274
	I1128 01:06:32.506338   53588 fix.go:219] Guest: 2023-11-28 01:06:32.490385274 +0000 UTC Remote: 2023-11-28 01:06:32.365487512 +0000 UTC m=+27.084036521 (delta=124.897762ms)
	I1128 01:06:32.506372   53588 fix.go:190] guest clock delta is within tolerance: 124.897762ms
	I1128 01:06:32.506391   53588 start.go:83] releasing machines lock for "custom-flannel-167798", held for 27.09470015s
	I1128 01:06:32.506420   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .DriverName
	I1128 01:06:32.506750   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetIP
	I1128 01:06:32.509653   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:32.510056   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f3:c0", ip: ""} in network mk-custom-flannel-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:06:22 +0000 UTC Type:0 Mac:52:54:00:f0:f3:c0 Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:custom-flannel-167798 Clientid:01:52:54:00:f0:f3:c0}
	I1128 01:06:32.510089   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined IP address 192.168.61.204 and MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:32.510249   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .DriverName
	I1128 01:06:32.510752   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .DriverName
	I1128 01:06:32.510898   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .DriverName
	I1128 01:06:32.510971   53588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 01:06:32.511009   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHHostname
	I1128 01:06:32.511105   53588 ssh_runner.go:195] Run: cat /version.json
	I1128 01:06:32.511135   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHHostname
	I1128 01:06:32.513824   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:32.514163   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f3:c0", ip: ""} in network mk-custom-flannel-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:06:22 +0000 UTC Type:0 Mac:52:54:00:f0:f3:c0 Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:custom-flannel-167798 Clientid:01:52:54:00:f0:f3:c0}
	I1128 01:06:32.514188   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined IP address 192.168.61.204 and MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:32.514213   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:32.514368   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHPort
	I1128 01:06:32.514533   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHKeyPath
	I1128 01:06:32.514678   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f3:c0", ip: ""} in network mk-custom-flannel-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:06:22 +0000 UTC Type:0 Mac:52:54:00:f0:f3:c0 Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:custom-flannel-167798 Clientid:01:52:54:00:f0:f3:c0}
	I1128 01:06:32.514699   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHUsername
	I1128 01:06:32.514700   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined IP address 192.168.61.204 and MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:32.514855   53588 sshutil.go:53] new ssh client: &{IP:192.168.61.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/custom-flannel-167798/id_rsa Username:docker}
	I1128 01:06:32.514871   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHPort
	I1128 01:06:32.515087   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHKeyPath
	I1128 01:06:32.515220   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetSSHUsername
	I1128 01:06:32.515323   53588 sshutil.go:53] new ssh client: &{IP:192.168.61.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/custom-flannel-167798/id_rsa Username:docker}
	I1128 01:06:32.605724   53588 ssh_runner.go:195] Run: systemctl --version
	I1128 01:06:32.631782   53588 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 01:06:32.806365   53588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 01:06:32.812931   53588 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 01:06:32.813010   53588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 01:06:32.827813   53588 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 01:06:32.827840   53588 start.go:472] detecting cgroup driver to use...
	I1128 01:06:32.827907   53588 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 01:06:32.846145   53588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 01:06:32.861814   53588 docker.go:203] disabling cri-docker service (if available) ...
	I1128 01:06:32.861903   53588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 01:06:32.875123   53588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 01:06:32.890870   53588 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 01:06:33.009184   53588 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 01:06:33.150978   53588 docker.go:219] disabling docker service ...
	I1128 01:06:33.151071   53588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 01:06:33.169465   53588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 01:06:33.183897   53588 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 01:06:33.318109   53588 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 01:06:33.458241   53588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 01:06:33.475086   53588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 01:06:33.493242   53588 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 01:06:33.493314   53588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 01:06:33.504062   53588 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 01:06:33.504132   53588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 01:06:33.514308   53588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 01:06:33.524480   53588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 01:06:33.534306   53588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 01:06:33.545331   53588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 01:06:33.555036   53588 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 01:06:33.555111   53588 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 01:06:33.569216   53588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 01:06:33.578442   53588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 01:06:33.703666   53588 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 01:06:33.877862   53588 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 01:06:33.877933   53588 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 01:06:33.883282   53588 start.go:540] Will wait 60s for crictl version
	I1128 01:06:33.883345   53588 ssh_runner.go:195] Run: which crictl
	I1128 01:06:33.887411   53588 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 01:06:33.923342   53588 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 01:06:33.923415   53588 ssh_runner.go:195] Run: crio --version
	I1128 01:06:33.978252   53588 ssh_runner.go:195] Run: crio --version
	I1128 01:06:34.098913   53588 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 01:06:31.537981   51982 node_ready.go:58] node "calico-167798" has status "Ready":"False"
	I1128 01:06:33.538387   51982 node_ready.go:58] node "calico-167798" has status "Ready":"False"
	I1128 01:06:34.100423   53588 main.go:141] libmachine: (custom-flannel-167798) Calling .GetIP
	I1128 01:06:34.103665   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:34.104052   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:f3:c0", ip: ""} in network mk-custom-flannel-167798: {Iface:virbr4 ExpiryTime:2023-11-28 02:06:22 +0000 UTC Type:0 Mac:52:54:00:f0:f3:c0 Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:custom-flannel-167798 Clientid:01:52:54:00:f0:f3:c0}
	I1128 01:06:34.104080   53588 main.go:141] libmachine: (custom-flannel-167798) DBG | domain custom-flannel-167798 has defined IP address 192.168.61.204 and MAC address 52:54:00:f0:f3:c0 in network mk-custom-flannel-167798
	I1128 01:06:34.104278   53588 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1128 01:06:34.109653   53588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 01:06:34.123844   53588 localpath.go:92] copying /home/jenkins/minikube-integration/17206-4749/.minikube/client.crt -> /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/client.crt
	I1128 01:06:34.124032   53588 localpath.go:117] copying /home/jenkins/minikube-integration/17206-4749/.minikube/client.key -> /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/client.key
	I1128 01:06:34.124171   53588 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 01:06:34.124252   53588 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 01:06:34.159351   53588 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 01:06:34.159417   53588 ssh_runner.go:195] Run: which lz4
	I1128 01:06:34.163375   53588 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 01:06:34.167750   53588 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 01:06:34.167777   53588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 01:06:35.538661   51982 node_ready.go:58] node "calico-167798" has status "Ready":"False"
	I1128 01:06:36.038747   51982 node_ready.go:49] node "calico-167798" has status "Ready":"True"
	I1128 01:06:36.038765   51982 node_ready.go:38] duration metric: took 11.242873534s waiting for node "calico-167798" to be "Ready" ...
	I1128 01:06:36.038773   51982 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 01:06:36.056314   51982 pod_ready.go:78] waiting up to 15m0s for pod "calico-kube-controllers-558d465845-9r5tz" in "kube-system" namespace to be "Ready" ...
	I1128 01:06:38.097207   51982 pod_ready.go:102] pod "calico-kube-controllers-558d465845-9r5tz" in "kube-system" namespace has status "Ready":"False"
	I1128 01:06:36.035789   53588 crio.go:444] Took 1.872451 seconds to copy over tarball
	I1128 01:06:36.035856   53588 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 01:06:39.129703   53588 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.093818302s)
	I1128 01:06:39.129730   53588 crio.go:451] Took 3.093919 seconds to extract the tarball
	I1128 01:06:39.129743   53588 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 01:06:39.190050   53588 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 01:06:39.270387   53588 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 01:06:39.270412   53588 cache_images.go:84] Images are preloaded, skipping loading
	I1128 01:06:39.270482   53588 ssh_runner.go:195] Run: crio config
	I1128 01:06:39.351269   53588 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1128 01:06:39.351310   53588 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 01:06:39.351329   53588 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.204 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-167798 NodeName:custom-flannel-167798 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.204"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.204 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 01:06:39.351492   53588 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.204
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-167798"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.204
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.204"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 01:06:39.351599   53588 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=custom-flannel-167798 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.204
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-167798 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:}
	I1128 01:06:39.351682   53588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 01:06:39.363912   53588 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 01:06:39.363983   53588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 01:06:39.373999   53588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I1128 01:06:39.391351   53588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 01:06:39.409189   53588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2108 bytes)
	I1128 01:06:39.426204   53588 ssh_runner.go:195] Run: grep 192.168.61.204	control-plane.minikube.internal$ /etc/hosts
	I1128 01:06:39.431284   53588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.204	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 01:06:39.445193   53588 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798 for IP: 192.168.61.204
	I1128 01:06:39.445223   53588 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:06:39.445369   53588 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 01:06:39.445419   53588 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 01:06:39.445520   53588 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/client.key
	I1128 01:06:39.445554   53588 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/apiserver.key.9889daca
	I1128 01:06:39.445573   53588 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/apiserver.crt.9889daca with IP's: [192.168.61.204 10.96.0.1 127.0.0.1 10.0.0.1]
	I1128 01:06:39.528682   53588 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/apiserver.crt.9889daca ...
	I1128 01:06:39.528712   53588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/apiserver.crt.9889daca: {Name:mk8efb54502818cf4cca6e83baa43fbdc9bf9482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:06:39.528899   53588 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/apiserver.key.9889daca ...
	I1128 01:06:39.528917   53588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/apiserver.key.9889daca: {Name:mk99a15d9d5bcf4dffd60fb65f7501f618d8396c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:06:39.529006   53588 certs.go:337] copying /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/apiserver.crt.9889daca -> /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/apiserver.crt
	I1128 01:06:39.529083   53588 certs.go:341] copying /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/apiserver.key.9889daca -> /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/apiserver.key
	I1128 01:06:39.529156   53588 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/proxy-client.key
	I1128 01:06:39.529182   53588 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/proxy-client.crt with IP's: []
	I1128 01:06:39.669388   53588 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/proxy-client.crt ...
	I1128 01:06:39.669418   53588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/proxy-client.crt: {Name:mkfff3c9c5fe2aeca4b87782d064f6e6a076f9be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:06:39.669593   53588 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/proxy-client.key ...
	I1128 01:06:39.669610   53588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/proxy-client.key: {Name:mk524cefb7b424728cc996cc3f948b6fa77af593 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:06:39.669786   53588 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 01:06:39.669875   53588 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 01:06:39.669896   53588 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 01:06:39.669930   53588 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 01:06:39.669964   53588 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 01:06:39.669998   53588 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 01:06:39.670067   53588 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 01:06:39.670706   53588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 01:06:39.699628   53588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1128 01:06:39.728121   53588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 01:06:39.755640   53588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 01:06:39.784037   53588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 01:06:39.813389   53588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 01:06:39.841945   53588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 01:06:39.867017   53588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 01:06:39.892853   53588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 01:06:39.921681   53588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 01:06:39.950394   53588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 01:06:39.978583   53588 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 01:06:39.996883   53588 ssh_runner.go:195] Run: openssl version
	I1128 01:06:40.002560   53588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 01:06:40.012440   53588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 01:06:40.017284   53588 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 01:06:40.017357   53588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 01:06:40.023154   53588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 01:06:40.033918   53588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 01:06:40.043948   53588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 01:06:40.048893   53588 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 01:06:40.048983   53588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 01:06:40.054828   53588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 01:06:40.065784   53588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 01:06:40.076483   53588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 01:06:40.081638   53588 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 01:06:40.081703   53588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 01:06:40.087582   53588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 01:06:40.097673   53588 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 01:06:40.102767   53588 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 01:06:40.102823   53588 kubeadm.go:404] StartCluster: {Name:custom-flannel-167798 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:custom-flannel-167798 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.204 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docke
r MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 01:06:40.102892   53588 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 01:06:40.102927   53588 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 01:06:40.148669   53588 cri.go:89] found id: ""
	I1128 01:06:40.148745   53588 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 01:06:40.160393   53588 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 01:06:40.169892   53588 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 01:06:40.180658   53588 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 01:06:40.180717   53588 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 01:06:40.243079   53588 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1128 01:06:40.243304   53588 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 01:06:40.373637   53588 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 01:06:40.461101   53588 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 01:06:40.461273   53588 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 01:06:40.646030   53588 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 01:06:40.740002   53588 out.go:204]   - Generating certificates and keys ...
	I1128 01:06:40.740114   53588 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 01:06:40.740230   53588 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 01:06:40.923949   53588 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1128 01:06:41.289178   53588 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1128 01:06:41.397295   53588 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1128 01:06:41.609973   53588 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1128 01:06:41.742077   53588 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1128 01:06:41.742311   53588 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-167798 localhost] and IPs [192.168.61.204 127.0.0.1 ::1]
	I1128 01:06:41.971892   53588 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1128 01:06:41.972239   53588 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-167798 localhost] and IPs [192.168.61.204 127.0.0.1 ::1]
	I1128 01:06:42.122898   53588 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1128 01:06:42.242242   53588 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1128 01:06:42.406510   53588 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1128 01:06:42.406873   53588 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 01:06:42.474643   53588 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 01:06:42.556218   53588 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 01:06:42.757717   53588 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 01:06:43.063016   53588 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 01:06:43.063584   53588 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 01:06:43.067272   53588 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 01:06:40.245110   51982 pod_ready.go:102] pod "calico-kube-controllers-558d465845-9r5tz" in "kube-system" namespace has status "Ready":"False"
	I1128 01:06:42.597585   51982 pod_ready.go:102] pod "calico-kube-controllers-558d465845-9r5tz" in "kube-system" namespace has status "Ready":"False"
	I1128 01:06:43.069192   53588 out.go:204]   - Booting up control plane ...
	I1128 01:06:43.069302   53588 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 01:06:43.069736   53588 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 01:06:43.070780   53588 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 01:06:43.086007   53588 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 01:06:43.086947   53588 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 01:06:43.087065   53588 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 01:06:43.206293   53588 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 01:06:44.599827   51982 pod_ready.go:102] pod "calico-kube-controllers-558d465845-9r5tz" in "kube-system" namespace has status "Ready":"False"
	I1128 01:06:47.111801   51982 pod_ready.go:102] pod "calico-kube-controllers-558d465845-9r5tz" in "kube-system" namespace has status "Ready":"False"
	I1128 01:06:51.709878   53588 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504329 seconds
	I1128 01:06:51.710052   53588 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 01:06:51.727313   53588 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 01:06:52.270419   53588 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 01:06:52.270674   53588 kubeadm.go:322] [mark-control-plane] Marking the node custom-flannel-167798 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 01:06:52.787479   53588 kubeadm.go:322] [bootstrap-token] Using token: ex453b.bg4w7t9s5wsfyz6k
	I1128 01:06:52.789160   53588 out.go:204]   - Configuring RBAC rules ...
	I1128 01:06:52.789270   53588 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 01:06:52.795520   53588 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 01:06:52.807446   53588 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 01:06:52.811883   53588 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 01:06:52.819116   53588 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 01:06:52.828818   53588 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 01:06:52.846973   53588 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 01:06:53.147715   53588 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 01:06:53.204938   53588 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 01:06:53.206078   53588 kubeadm.go:322] 
	I1128 01:06:53.206170   53588 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 01:06:53.206184   53588 kubeadm.go:322] 
	I1128 01:06:53.206274   53588 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 01:06:53.206299   53588 kubeadm.go:322] 
	I1128 01:06:53.206348   53588 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 01:06:53.206421   53588 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 01:06:53.206491   53588 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 01:06:53.206515   53588 kubeadm.go:322] 
	I1128 01:06:53.206589   53588 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 01:06:53.206599   53588 kubeadm.go:322] 
	I1128 01:06:53.206677   53588 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 01:06:53.206688   53588 kubeadm.go:322] 
	I1128 01:06:53.206755   53588 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 01:06:53.206869   53588 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 01:06:53.206982   53588 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 01:06:53.206999   53588 kubeadm.go:322] 
	I1128 01:06:53.207108   53588 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 01:06:53.207221   53588 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 01:06:53.207234   53588 kubeadm.go:322] 
	I1128 01:06:53.207349   53588 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ex453b.bg4w7t9s5wsfyz6k \
	I1128 01:06:53.207496   53588 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1128 01:06:53.207531   53588 kubeadm.go:322] 	--control-plane 
	I1128 01:06:53.207546   53588 kubeadm.go:322] 
	I1128 01:06:53.207672   53588 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 01:06:53.207681   53588 kubeadm.go:322] 
	I1128 01:06:53.207799   53588 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ex453b.bg4w7t9s5wsfyz6k \
	I1128 01:06:53.207961   53588 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1128 01:06:53.208168   53588 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 01:06:53.208207   53588 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1128 01:06:53.210265   53588 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I1128 01:06:49.597305   51982 pod_ready.go:102] pod "calico-kube-controllers-558d465845-9r5tz" in "kube-system" namespace has status "Ready":"False"
	I1128 01:06:51.597799   51982 pod_ready.go:102] pod "calico-kube-controllers-558d465845-9r5tz" in "kube-system" namespace has status "Ready":"False"
	I1128 01:06:54.100021   51982 pod_ready.go:102] pod "calico-kube-controllers-558d465845-9r5tz" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 00:43:49 UTC, ends at Tue 2023-11-28 01:06:55 UTC. --
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.665894496Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133615665878979,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8b8cf3f3-1e1f-416d-be35-46720e3693a1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.666608209Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1d7f7a6e-f2a1-49d6-a531-4fa4be985e17 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.666655032Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1d7f7a6e-f2a1-49d6-a531-4fa4be985e17 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.666809304Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00cd4d8553882711c7182818593a636c69b27b4ea9eac918d5f368c1b97a24a8,PodSandboxId:696e3b6bab7a17aa752ff819247d0c210d64838951c2501a0af365a7149040d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701132275036064898,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 95d5410e-5ec3-42c3-a64c-9d6034cc2479,},Annotations:map[string]string{io.kubernetes.container.hash: ed439777,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b,PodSandboxId:db173541c1f693c36f882b61c72100092aae1f213165cfc39180293beaf46f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701132272143130271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n7qpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d027f799-6ced-488e-a4f7-6df351193c64,},Annotations:map[string]string{io.kubernetes.container.hash: 4bcc6111,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc,PodSandboxId:7767398bc7e78b85ca7deaa9630eacc36762317b3c7c37a9efcaee3340cddeda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132266343876864,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: f1e6e7d1-86aa-403c-b753-2b94beb7d7b1,},Annotations:map[string]string{io.kubernetes.container.hash: db1f1b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55,PodSandboxId:9c7ea99fb0fcc9748e79f1d7f62b930f78ab4a1ecc7542817b70d7de0cbb79fa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701132264247661359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2sfbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8d92ac1f-4070-4000-9bc6-3d277e0c8c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 49ba80c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193,PodSandboxId:5994df2943032a79d68e673229bf154c7b911bf283ada2e1e3e144bfdf34b0ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701132257298128614,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 881a15d8e5113e1b5a7cb1c587b7f2ea,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c,PodSandboxId:04d169305b4e28a6765e10820c14fec3f76d3c01542f0166d6479065f913685a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701132257076988000,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b8a335211a74893a9e0a2fbb3b79b67,},
Annotations:map[string]string{io.kubernetes.container.hash: 38744c33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64,PodSandboxId:c4199db689e7ed22a8fe0bfa3bfdfeeb3cfff7df3a398b800d8cfb8e9dceec64,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701132256720175268,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e59a0e6ef976f3f43fd190f644b8b03a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6,PodSandboxId:6b016cf4659b87726f157a5742ce570c6af007503fae7476c146a8c175d94eb1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701132256527580043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
64434154c26454bb0c93b2b163c531da,},Annotations:map[string]string{io.kubernetes.container.hash: 94ce39d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1d7f7a6e-f2a1-49d6-a531-4fa4be985e17 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.708359180Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=dcbb157e-de27-4c24-b587-60a8d1f060fa name=/runtime.v1.RuntimeService/Version
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.708475337Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=dcbb157e-de27-4c24-b587-60a8d1f060fa name=/runtime.v1.RuntimeService/Version
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.710374068Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ddfc61d2-6ff4-4231-87f9-db84309289bb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.710829667Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133615710814268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ddfc61d2-6ff4-4231-87f9-db84309289bb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.711827474Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e497a3cd-d21f-4dc1-848e-84de9845b37b name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.711898749Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e497a3cd-d21f-4dc1-848e-84de9845b37b name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.712197944Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00cd4d8553882711c7182818593a636c69b27b4ea9eac918d5f368c1b97a24a8,PodSandboxId:696e3b6bab7a17aa752ff819247d0c210d64838951c2501a0af365a7149040d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701132275036064898,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 95d5410e-5ec3-42c3-a64c-9d6034cc2479,},Annotations:map[string]string{io.kubernetes.container.hash: ed439777,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b,PodSandboxId:db173541c1f693c36f882b61c72100092aae1f213165cfc39180293beaf46f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701132272143130271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n7qpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d027f799-6ced-488e-a4f7-6df351193c64,},Annotations:map[string]string{io.kubernetes.container.hash: 4bcc6111,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc,PodSandboxId:7767398bc7e78b85ca7deaa9630eacc36762317b3c7c37a9efcaee3340cddeda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132266343876864,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: f1e6e7d1-86aa-403c-b753-2b94beb7d7b1,},Annotations:map[string]string{io.kubernetes.container.hash: db1f1b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55,PodSandboxId:9c7ea99fb0fcc9748e79f1d7f62b930f78ab4a1ecc7542817b70d7de0cbb79fa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701132264247661359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2sfbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8d92ac1f-4070-4000-9bc6-3d277e0c8c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 49ba80c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193,PodSandboxId:5994df2943032a79d68e673229bf154c7b911bf283ada2e1e3e144bfdf34b0ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701132257298128614,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 881a15d8e5113e1b5a7cb1c587b7f2ea,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c,PodSandboxId:04d169305b4e28a6765e10820c14fec3f76d3c01542f0166d6479065f913685a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701132257076988000,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b8a335211a74893a9e0a2fbb3b79b67,},
Annotations:map[string]string{io.kubernetes.container.hash: 38744c33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64,PodSandboxId:c4199db689e7ed22a8fe0bfa3bfdfeeb3cfff7df3a398b800d8cfb8e9dceec64,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701132256720175268,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e59a0e6ef976f3f43fd190f644b8b03a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6,PodSandboxId:6b016cf4659b87726f157a5742ce570c6af007503fae7476c146a8c175d94eb1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701132256527580043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
64434154c26454bb0c93b2b163c531da,},Annotations:map[string]string{io.kubernetes.container.hash: 94ce39d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e497a3cd-d21f-4dc1-848e-84de9845b37b name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.759471180Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2400a0e5-b391-40df-b51f-3c03090312ab name=/runtime.v1.RuntimeService/Version
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.759572825Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2400a0e5-b391-40df-b51f-3c03090312ab name=/runtime.v1.RuntimeService/Version
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.765974172Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=873c5ce0-c0c3-4e51-9773-e055ec6a7cfa name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.766483456Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133615766469070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=873c5ce0-c0c3-4e51-9773-e055ec6a7cfa name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.768525199Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=26ad3b03-8fe8-462a-8e14-5ba0a17d59af name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.768594387Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=26ad3b03-8fe8-462a-8e14-5ba0a17d59af name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.768898341Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00cd4d8553882711c7182818593a636c69b27b4ea9eac918d5f368c1b97a24a8,PodSandboxId:696e3b6bab7a17aa752ff819247d0c210d64838951c2501a0af365a7149040d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701132275036064898,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 95d5410e-5ec3-42c3-a64c-9d6034cc2479,},Annotations:map[string]string{io.kubernetes.container.hash: ed439777,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b,PodSandboxId:db173541c1f693c36f882b61c72100092aae1f213165cfc39180293beaf46f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701132272143130271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n7qpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d027f799-6ced-488e-a4f7-6df351193c64,},Annotations:map[string]string{io.kubernetes.container.hash: 4bcc6111,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc,PodSandboxId:7767398bc7e78b85ca7deaa9630eacc36762317b3c7c37a9efcaee3340cddeda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132266343876864,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: f1e6e7d1-86aa-403c-b753-2b94beb7d7b1,},Annotations:map[string]string{io.kubernetes.container.hash: db1f1b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55,PodSandboxId:9c7ea99fb0fcc9748e79f1d7f62b930f78ab4a1ecc7542817b70d7de0cbb79fa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701132264247661359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2sfbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8d92ac1f-4070-4000-9bc6-3d277e0c8c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 49ba80c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193,PodSandboxId:5994df2943032a79d68e673229bf154c7b911bf283ada2e1e3e144bfdf34b0ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701132257298128614,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 881a15d8e5113e1b5a7cb1c587b7f2ea,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c,PodSandboxId:04d169305b4e28a6765e10820c14fec3f76d3c01542f0166d6479065f913685a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701132257076988000,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b8a335211a74893a9e0a2fbb3b79b67,},
Annotations:map[string]string{io.kubernetes.container.hash: 38744c33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64,PodSandboxId:c4199db689e7ed22a8fe0bfa3bfdfeeb3cfff7df3a398b800d8cfb8e9dceec64,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701132256720175268,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e59a0e6ef976f3f43fd190f644b8b03a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6,PodSandboxId:6b016cf4659b87726f157a5742ce570c6af007503fae7476c146a8c175d94eb1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701132256527580043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
64434154c26454bb0c93b2b163c531da,},Annotations:map[string]string{io.kubernetes.container.hash: 94ce39d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=26ad3b03-8fe8-462a-8e14-5ba0a17d59af name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.813678182Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=da1eda35-a8e2-4bf7-964b-5d59e7bad8cf name=/runtime.v1.RuntimeService/Version
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.813761819Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=da1eda35-a8e2-4bf7-964b-5d59e7bad8cf name=/runtime.v1.RuntimeService/Version
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.815821952Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9635655b-5bab-470e-bce1-69eda83582ff name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.816474187Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133615816420348,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9635655b-5bab-470e-bce1-69eda83582ff name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.817197064Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0d216e9d-fd14-48c7-b15c-251bbb944586 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.817246487Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0d216e9d-fd14-48c7-b15c-251bbb944586 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:06:55 default-k8s-diff-port-488423 crio[710]: time="2023-11-28 01:06:55.817461249Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00cd4d8553882711c7182818593a636c69b27b4ea9eac918d5f368c1b97a24a8,PodSandboxId:696e3b6bab7a17aa752ff819247d0c210d64838951c2501a0af365a7149040d1,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701132275036064898,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 95d5410e-5ec3-42c3-a64c-9d6034cc2479,},Annotations:map[string]string{io.kubernetes.container.hash: ed439777,io.kubernetes.container.restartCount: 1,io.kubernetes
.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b,PodSandboxId:db173541c1f693c36f882b61c72100092aae1f213165cfc39180293beaf46f63,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701132272143130271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-n7qpb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d027f799-6ced-488e-a4f7-6df351193c64,},Annotations:map[string]string{io.kubernetes.container.hash: 4bcc6111,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc,PodSandboxId:7767398bc7e78b85ca7deaa9630eacc36762317b3c7c37a9efcaee3340cddeda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132266343876864,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: f1e6e7d1-86aa-403c-b753-2b94beb7d7b1,},Annotations:map[string]string{io.kubernetes.container.hash: db1f1b6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55,PodSandboxId:9c7ea99fb0fcc9748e79f1d7f62b930f78ab4a1ecc7542817b70d7de0cbb79fa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701132264247661359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2sfbm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8d92ac1f-4070-4000-9bc6-3d277e0c8c6e,},Annotations:map[string]string{io.kubernetes.container.hash: 49ba80c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193,PodSandboxId:5994df2943032a79d68e673229bf154c7b911bf283ada2e1e3e144bfdf34b0ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701132257298128614,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 881a15d8e5113e1b5a7cb1c587b7f2ea,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c,PodSandboxId:04d169305b4e28a6765e10820c14fec3f76d3c01542f0166d6479065f913685a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701132257076988000,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b8a335211a74893a9e0a2fbb3b79b67,},
Annotations:map[string]string{io.kubernetes.container.hash: 38744c33,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64,PodSandboxId:c4199db689e7ed22a8fe0bfa3bfdfeeb3cfff7df3a398b800d8cfb8e9dceec64,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701132256720175268,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e59a0e6ef976f3f43fd190f644b8b03a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6,PodSandboxId:6b016cf4659b87726f157a5742ce570c6af007503fae7476c146a8c175d94eb1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701132256527580043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-488423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
64434154c26454bb0c93b2b163c531da,},Annotations:map[string]string{io.kubernetes.container.hash: 94ce39d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0d216e9d-fd14-48c7-b15c-251bbb944586 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	00cd4d8553882       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   696e3b6bab7a1       busybox
	02084fe546b60       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      22 minutes ago      Running             coredns                   1                   db173541c1f69       coredns-5dd5756b68-n7qpb
	fe8f8f443aabe       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Running             storage-provisioner       1                   7767398bc7e78       storage-provisioner
	2d6fefc920655       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      22 minutes ago      Running             kube-proxy                1                   9c7ea99fb0fcc       kube-proxy-2sfbm
	032c85dd651d9       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      22 minutes ago      Running             kube-scheduler            1                   5994df2943032       kube-scheduler-default-k8s-diff-port-488423
	0c0deffc33b75       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      22 minutes ago      Running             etcd                      1                   04d169305b4e2       etcd-default-k8s-diff-port-488423
	cdf1978d16c71       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      22 minutes ago      Running             kube-controller-manager   1                   c4199db689e7e       kube-controller-manager-default-k8s-diff-port-488423
	a108c17df3e3a       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      22 minutes ago      Running             kube-apiserver            1                   6b016cf4659b8       kube-apiserver-default-k8s-diff-port-488423
	
	* 
	* ==> coredns [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55794 - 32904 "HINFO IN 6344863561981079725.1139160491145542212. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023820628s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-488423
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-488423
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=default-k8s-diff-port-488423
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T00_37_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 00:37:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-488423
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 01:06:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 01:05:19 +0000   Tue, 28 Nov 2023 00:37:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 01:05:19 +0000   Tue, 28 Nov 2023 00:37:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 01:05:19 +0000   Tue, 28 Nov 2023 00:37:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 01:05:19 +0000   Tue, 28 Nov 2023 00:44:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.242
	  Hostname:    default-k8s-diff-port-488423
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 6327e8bb62834ea9b622947f0d7df4bd
	  System UUID:                6327e8bb-6283-4ea9-b622-947f0d7df4bd
	  Boot ID:                    380a6c4b-cffa-42f5-b658-f63a7c6bc5e6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-n7qpb                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-488423                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-488423              250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-488423     200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-2sfbm                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-488423              100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-fk9xx                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-488423 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-488423 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-488423 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-488423 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-488423 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-488423 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-488423 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-488423 event: Registered Node default-k8s-diff-port-488423 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-488423 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-488423 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-488423 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-488423 event: Registered Node default-k8s-diff-port-488423 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov28 00:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076626] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.622917] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.553999] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.135713] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.650058] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.497498] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.105983] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.147401] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.104573] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.240805] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[Nov28 00:44] systemd-fstab-generator[912]: Ignoring "noauto" for root device
	[ +16.342613] kauditd_printk_skb: 19 callbacks suppressed
	[Nov28 01:06] hrtimer: interrupt took 3249946 ns
	
	* 
	* ==> etcd [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c] <==
	* {"level":"warn","ts":"2023-11-28T00:44:26.853259Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T00:44:26.476845Z","time spent":"376.403982ms","remote":"127.0.0.1:41848","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":218,"request content":"key:\"/registry/serviceaccounts/kube-system/node-controller\" "}
	{"level":"warn","ts":"2023-11-28T00:44:26.853231Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"321.817503ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1128"}
	{"level":"info","ts":"2023-11-28T00:44:26.854108Z","caller":"traceutil/trace.go:171","msg":"trace[376844974] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:531; }","duration":"322.689883ms","start":"2023-11-28T00:44:26.531407Z","end":"2023-11-28T00:44:26.854097Z","steps":["trace[376844974] 'agreement among raft nodes before linearized reading'  (duration: 321.794976ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T00:44:26.854159Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T00:44:26.53139Z","time spent":"322.760453ms","remote":"127.0.0.1:41840","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1151,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2023-11-28T00:44:26.853263Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"369.962252ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-2sfbm\" ","response":"range_response_count:1 size:4609"}
	{"level":"info","ts":"2023-11-28T00:44:26.854444Z","caller":"traceutil/trace.go:171","msg":"trace[1571027589] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-2sfbm; range_end:; response_count:1; response_revision:531; }","duration":"371.140046ms","start":"2023-11-28T00:44:26.483296Z","end":"2023-11-28T00:44:26.854436Z","steps":["trace[1571027589] 'agreement among raft nodes before linearized reading'  (duration: 369.947171ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T00:44:26.854487Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T00:44:26.483284Z","time spent":"371.194775ms","remote":"127.0.0.1:41844","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":4632,"request content":"key:\"/registry/pods/kube-system/kube-proxy-2sfbm\" "}
	{"level":"warn","ts":"2023-11-28T00:44:26.853176Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T00:44:26.477228Z","time spent":"375.902379ms","remote":"127.0.0.1:41826","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":787,"request content":"key:\"/registry/configmaps/kube-system/coredns\" "}
	{"level":"info","ts":"2023-11-28T00:54:20.722228Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":830}
	{"level":"info","ts":"2023-11-28T00:54:20.724762Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":830,"took":"2.153641ms","hash":3084872912}
	{"level":"info","ts":"2023-11-28T00:54:20.72484Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3084872912,"revision":830,"compact-revision":-1}
	{"level":"info","ts":"2023-11-28T00:59:20.730451Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1072}
	{"level":"info","ts":"2023-11-28T00:59:20.732558Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1072,"took":"1.773629ms","hash":2067898719}
	{"level":"info","ts":"2023-11-28T00:59:20.732617Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2067898719,"revision":1072,"compact-revision":830}
	{"level":"info","ts":"2023-11-28T01:03:59.027317Z","caller":"traceutil/trace.go:171","msg":"trace[1015352132] transaction","detail":"{read_only:false; response_revision:1541; number_of_response:1; }","duration":"131.95994ms","start":"2023-11-28T01:03:58.895294Z","end":"2023-11-28T01:03:59.027254Z","steps":["trace[1015352132] 'process raft request'  (duration: 131.47563ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T01:03:59.378546Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.256198ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9018612268691037321 > lease_revoke:<id:7d288c13624f1c40>","response":"size:28"}
	{"level":"info","ts":"2023-11-28T01:04:20.740177Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1314}
	{"level":"info","ts":"2023-11-28T01:04:20.745169Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1314,"took":"4.631823ms","hash":2405858949}
	{"level":"info","ts":"2023-11-28T01:04:20.745272Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2405858949,"revision":1314,"compact-revision":1072}
	{"level":"info","ts":"2023-11-28T01:04:27.499339Z","caller":"traceutil/trace.go:171","msg":"trace[208776755] transaction","detail":"{read_only:false; response_revision:1564; number_of_response:1; }","duration":"304.987504ms","start":"2023-11-28T01:04:27.194323Z","end":"2023-11-28T01:04:27.499311Z","steps":["trace[208776755] 'process raft request'  (duration: 304.500687ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T01:04:27.499951Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T01:04:27.194308Z","time spent":"305.119774ms","remote":"127.0.0.1:41840","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1562 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-11-28T01:05:58.254641Z","caller":"traceutil/trace.go:171","msg":"trace[687594232] transaction","detail":"{read_only:false; response_revision:1639; number_of_response:1; }","duration":"227.840375ms","start":"2023-11-28T01:05:58.026713Z","end":"2023-11-28T01:05:58.254553Z","steps":["trace[687594232] 'process raft request'  (duration: 227.704943ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T01:06:39.700939Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"296.022882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-28T01:06:39.701209Z","caller":"traceutil/trace.go:171","msg":"trace[1587071608] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1671; }","duration":"296.36319ms","start":"2023-11-28T01:06:39.404809Z","end":"2023-11-28T01:06:39.701173Z","steps":["trace[1587071608] 'range keys from in-memory index tree'  (duration: 295.777069ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-28T01:06:42.098342Z","caller":"traceutil/trace.go:171","msg":"trace[2084911168] transaction","detail":"{read_only:false; response_revision:1674; number_of_response:1; }","duration":"111.941389ms","start":"2023-11-28T01:06:41.986373Z","end":"2023-11-28T01:06:42.098314Z","steps":["trace[2084911168] 'process raft request'  (duration: 111.449191ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  01:06:56 up 23 min,  0 users,  load average: 0.27, 0.20, 0.18
	Linux default-k8s-diff-port-488423 5.10.57 #1 SMP Mon Nov 27 21:58:27 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6] <==
	* I1128 01:02:23.748720       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1128 01:02:23.748720       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 01:02:23.750888       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 01:03:22.601892       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1128 01:04:22.601838       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1128 01:04:22.753617       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 01:04:22.753834       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 01:04:22.754823       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1128 01:04:23.756262       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 01:04:23.756350       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 01:04:23.756380       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 01:04:23.756433       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 01:04:23.756498       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 01:04:23.757671       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 01:05:22.602948       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1128 01:05:23.757070       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 01:05:23.757172       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 01:05:23.757200       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 01:05:23.758151       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 01:05:23.758252       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 01:05:23.758261       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 01:06:22.601532       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64] <==
	* I1128 01:01:08.124365       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:01:37.569362       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:01:38.134506       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:02:07.575706       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:02:08.147689       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:02:37.581936       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:02:38.156144       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:03:07.587544       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:03:08.165937       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:03:37.594819       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:03:38.176862       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:04:07.602719       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:04:08.186538       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:04:37.607965       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:04:38.197619       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:05:07.614571       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:05:08.207500       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1128 01:05:35.416866       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="500.243µs"
	E1128 01:05:37.621396       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:05:38.218490       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1128 01:05:48.412699       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="175.158µs"
	E1128 01:06:07.628734       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:06:08.231559       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:06:37.636518       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:06:38.239805       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55] <==
	* I1128 00:44:25.466336       1 server_others.go:69] "Using iptables proxy"
	I1128 00:44:26.002587       1 node.go:141] Successfully retrieved node IP: 192.168.72.242
	I1128 00:44:26.087238       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1128 00:44:26.087290       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1128 00:44:26.102784       1 server_others.go:152] "Using iptables Proxier"
	I1128 00:44:26.102851       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1128 00:44:26.103156       1 server.go:846] "Version info" version="v1.28.4"
	I1128 00:44:26.103173       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 00:44:26.104608       1 config.go:188] "Starting service config controller"
	I1128 00:44:26.104718       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1128 00:44:26.104740       1 config.go:97] "Starting endpoint slice config controller"
	I1128 00:44:26.104744       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1128 00:44:26.105989       1 config.go:315] "Starting node config controller"
	I1128 00:44:26.105998       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1128 00:44:26.205914       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1128 00:44:26.206086       1 shared_informer.go:318] Caches are synced for service config
	I1128 00:44:26.206474       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193] <==
	* I1128 00:44:19.545213       1 serving.go:348] Generated self-signed cert in-memory
	W1128 00:44:22.631840       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1128 00:44:22.631948       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 00:44:22.631965       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1128 00:44:22.631975       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1128 00:44:22.721534       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1128 00:44:22.721630       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 00:44:22.724544       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1128 00:44:22.724715       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1128 00:44:22.726559       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1128 00:44:22.726682       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1128 00:44:22.825153       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 00:43:49 UTC, ends at Tue 2023-11-28 01:06:56 UTC. --
	Nov 28 01:04:15 default-k8s-diff-port-488423 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 01:04:20 default-k8s-diff-port-488423 kubelet[918]: E1128 01:04:20.394472     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 01:04:31 default-k8s-diff-port-488423 kubelet[918]: E1128 01:04:31.392711     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 01:04:46 default-k8s-diff-port-488423 kubelet[918]: E1128 01:04:46.392866     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 01:04:57 default-k8s-diff-port-488423 kubelet[918]: E1128 01:04:57.395605     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 01:05:11 default-k8s-diff-port-488423 kubelet[918]: E1128 01:05:11.393576     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 01:05:15 default-k8s-diff-port-488423 kubelet[918]: E1128 01:05:15.417228     918 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 01:05:15 default-k8s-diff-port-488423 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 01:05:15 default-k8s-diff-port-488423 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 01:05:15 default-k8s-diff-port-488423 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 01:05:23 default-k8s-diff-port-488423 kubelet[918]: E1128 01:05:23.411375     918 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Nov 28 01:05:23 default-k8s-diff-port-488423 kubelet[918]: E1128 01:05:23.411474     918 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Nov 28 01:05:23 default-k8s-diff-port-488423 kubelet[918]: E1128 01:05:23.411744     918 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mpksq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-fk9xx_kube-system(8b0d0cd6-41c5-4b67-98f9-f046e959e0e7): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 28 01:05:23 default-k8s-diff-port-488423 kubelet[918]: E1128 01:05:23.411810     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 01:05:35 default-k8s-diff-port-488423 kubelet[918]: E1128 01:05:35.394338     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 01:05:48 default-k8s-diff-port-488423 kubelet[918]: E1128 01:05:48.393801     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 01:06:02 default-k8s-diff-port-488423 kubelet[918]: E1128 01:06:02.394614     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 01:06:15 default-k8s-diff-port-488423 kubelet[918]: E1128 01:06:15.416201     918 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 01:06:15 default-k8s-diff-port-488423 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 01:06:15 default-k8s-diff-port-488423 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 01:06:15 default-k8s-diff-port-488423 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 01:06:16 default-k8s-diff-port-488423 kubelet[918]: E1128 01:06:16.393268     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 01:06:29 default-k8s-diff-port-488423 kubelet[918]: E1128 01:06:29.393551     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 01:06:42 default-k8s-diff-port-488423 kubelet[918]: E1128 01:06:42.393579     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	Nov 28 01:06:53 default-k8s-diff-port-488423 kubelet[918]: E1128 01:06:53.392781     918 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fk9xx" podUID="8b0d0cd6-41c5-4b67-98f9-f046e959e0e7"
	
	* 
	* ==> storage-provisioner [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc] <==
	* I1128 00:44:26.521255       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 00:44:26.530266       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 00:44:26.530345       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1128 00:44:44.281979       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1128 00:44:44.282476       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-488423_bd9676ab-00e6-4be8-b688-a9333b84eabd!
	I1128 00:44:44.283307       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0b5cd0a1-2266-494b-b45d-c4f4999214bf", APIVersion:"v1", ResourceVersion:"585", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-488423_bd9676ab-00e6-4be8-b688-a9333b84eabd became leader
	I1128 00:44:44.383405       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-488423_bd9676ab-00e6-4be8-b688-a9333b84eabd!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-488423 -n default-k8s-diff-port-488423
E1128 01:06:56.740612   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.crt: no such file or directory
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-488423 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-fk9xx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-488423 describe pod metrics-server-57f55c9bc5-fk9xx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-488423 describe pod metrics-server-57f55c9bc5-fk9xx: exit status 1 (68.405126ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-fk9xx" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-488423 describe pod metrics-server-57f55c9bc5-fk9xx: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.54s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (309.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1128 00:58:50.987709   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-473615 -n no-preload-473615
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-11-28 01:03:26.471851802 +0000 UTC m=+5908.036878430
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-473615 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-473615 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.664µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-473615 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-473615 -n no-preload-473615
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-473615 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-473615 logs -n 25: (1.292346741s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-747416                              | cert-expiration-747416       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:35 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-747416                              | cert-expiration-747416       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:35 UTC |
	| start   | -p embed-certs-304541                                  | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-732472        | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-732472                              | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-789586                              | stopped-upgrade-789586       | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-304541            | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-304541                                  | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-789586                              | stopped-upgrade-789586       | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-001086 | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	|         | disable-driver-mounts-001086                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:37 UTC |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-473615             | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:37 UTC | 28 Nov 23 00:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-473615                                   | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-732472             | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-488423  | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC | 28 Nov 23 00:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-732472                              | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC | 28 Nov 23 00:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-304541                 | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-304541                                  | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC | 28 Nov 23 00:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-473615                  | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-473615                                   | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC | 28 Nov 23 00:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-488423       | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:40 UTC | 28 Nov 23 00:48 UTC |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-732472                              | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 01:03 UTC | 28 Nov 23 01:03 UTC |
	| start   | -p newest-cni-517109 --memory=2200 --alsologtostderr   | newest-cni-517109            | jenkins | v1.32.0 | 28 Nov 23 01:03 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 01:03:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 01:03:24.573097   50808 out.go:296] Setting OutFile to fd 1 ...
	I1128 01:03:24.573341   50808 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 01:03:24.573350   50808 out.go:309] Setting ErrFile to fd 2...
	I1128 01:03:24.573354   50808 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 01:03:24.573509   50808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1128 01:03:24.574059   50808 out.go:303] Setting JSON to false
	I1128 01:03:24.574934   50808 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6352,"bootTime":1701127053,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 01:03:24.574995   50808 start.go:138] virtualization: kvm guest
	I1128 01:03:24.577264   50808 out.go:177] * [newest-cni-517109] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 01:03:24.578547   50808 out.go:177]   - MINIKUBE_LOCATION=17206
	I1128 01:03:24.579702   50808 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 01:03:24.578594   50808 notify.go:220] Checking for updates...
	I1128 01:03:24.582008   50808 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 01:03:24.583282   50808 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1128 01:03:24.584473   50808 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 01:03:24.585604   50808 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 01:03:24.587174   50808 config.go:182] Loaded profile config "default-k8s-diff-port-488423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 01:03:24.587270   50808 config.go:182] Loaded profile config "embed-certs-304541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 01:03:24.587362   50808 config.go:182] Loaded profile config "no-preload-473615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 01:03:24.587432   50808 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 01:03:24.622162   50808 out.go:177] * Using the kvm2 driver based on user configuration
	I1128 01:03:24.623729   50808 start.go:298] selected driver: kvm2
	I1128 01:03:24.623741   50808 start.go:902] validating driver "kvm2" against <nil>
	I1128 01:03:24.623751   50808 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 01:03:24.624452   50808 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 01:03:24.624528   50808 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17206-4749/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 01:03:24.639264   50808 install.go:137] /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I1128 01:03:24.639342   50808 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1128 01:03:24.639371   50808 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1128 01:03:24.639581   50808 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1128 01:03:24.639645   50808 cni.go:84] Creating CNI manager for ""
	I1128 01:03:24.639661   50808 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 01:03:24.639670   50808 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1128 01:03:24.639680   50808 start_flags.go:323] config:
	{Name:newest-cni-517109 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.0 ClusterName:newest-cni-517109 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 01:03:24.639820   50808 iso.go:125] acquiring lock: {Name:mkcbf4fbddcb89ef7fa17df683cb708781ecb7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 01:03:24.642287   50808 out.go:177] * Starting control plane node newest-cni-517109 in cluster newest-cni-517109
	I1128 01:03:24.643557   50808 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1128 01:03:24.643592   50808 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I1128 01:03:24.643599   50808 cache.go:56] Caching tarball of preloaded images
	I1128 01:03:24.643668   50808 preload.go:174] Found /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 01:03:24.643678   50808 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.0 on crio
	I1128 01:03:24.643775   50808 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/config.json ...
	I1128 01:03:24.643793   50808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/config.json: {Name:mkb225b0388c351aee1c13fa6fd5011f80575259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 01:03:24.643915   50808 start.go:365] acquiring machines lock for newest-cni-517109: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 01:03:24.643944   50808 start.go:369] acquired machines lock for "newest-cni-517109" in 15.084µs
	I1128 01:03:24.643959   50808 start.go:93] Provisioning new machine with config: &{Name:newest-cni-517109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.0-rc.0 ClusterName:newest-cni-517109 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 01:03:24.644025   50808 start.go:125] createHost starting for "" (driver="kvm2")
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 00:43:29 UTC, ends at Tue 2023-11-28 01:03:27 UTC. --
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.233193182Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9b08d255-0e56-43ee-9225-69d5d803ea0c name=/runtime.v1.RuntimeService/Version
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.240576312Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0c2862dc-5e04-469a-8df2-089efddb38fe name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.241183667Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133407241168998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=0c2862dc-5e04-469a-8df2-089efddb38fe name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.246822004Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cccd439e-d019-4e0a-af5c-eb5ea40e353d name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.246931239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cccd439e-d019-4e0a-af5c-eb5ea40e353d name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.247244591Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9ff96d344971a04b78226e1c5a9ebc442a23e99f6bb048a60b61d47ea0af8bb,PodSandboxId:8957d6ed3cc966e3b836721428acbefe4c3fbfda1a2b1ae44172336256a79621,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:19704ecb8a22fb777f438422b7f638673596735ee0223499327597aebef1072e,State:CONTAINER_RUNNING,CreatedAt:1701132553616756809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bv5lq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe88f49f-5fc1-4877-a982-38fee04c9e2d,},Annotations:map[string]string{io.kubernetes.container.hash: c96fec65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be4894b0fbd271d45c27b0679f1e0301e8036d8f952caa75e5c0862c06fbcdf4,PodSandboxId:d414344bff45051179f4bf4170323625abe5d4614e702700f666b8506881565c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701132553290577595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8fc9309-7354-44e3-aa10-f4fb3c185f62,},Annotations:map[string]string{io.kubernetes.container.hash: 54a3e66d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55e3f9ef21a0888a801827eb2ef31026c3bd1f4cb56a7ca88168e212db62c6b,PodSandboxId:80b0964cadf5d8e8d5269d6832c774d172211b59b42184ec2db7849f7694103c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701132552667379997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kbrjg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 881031bb-af46-48a7-b609-7fb1c96b2056,},Annotations:map[string]string{io.kubernetes.container.hash: 2cea8ed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e716c8ec94f44fecf8ef86b4a1f0fff5462f7b14a046f897025b638110ac22c3,PodSandboxId:0df9afb0e7e3c9a070809e4f05a24ea88395c68360cd7570744433bcfaaec601,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701132530924570327,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 668da7c7e0f6810eef3e399e8e6f2210,},Anno
tations:map[string]string{io.kubernetes.container.hash: 14c398b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ee6a417133208323eff9b8d4fd4e62628ee8dc816843c4cd608a63b8118dc4,PodSandboxId:76d3768344aa061f7c23d244d1ba4c84841c0ce92b7d044f1d08872dc4990b19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:45ece34cbcc6c82c13e0e535245454d071df5a3b78b23eb779c1b6b9ab3602d2,State:CONTAINER_RUNNING,CreatedAt:1701132530337556561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef9b4f333e88bdf9ad3d1f8cdb01d80,},Annotations:map
[string]string{io.kubernetes.container.hash: ee29696d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6eb74031eeb360d38b73f1aee944fe0b26dc37207fe030ae8638cca11ddcfba,PodSandboxId:7aba4e9fef68a95daeea9a95e45e17c576c835c50901bc834fda97389ce459f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:8691a74e5237be5a787cea07aefa76290f24bfac5c6b7a07469172fef09305c6,State:CONTAINER_RUNNING,CreatedAt:1701132530037671548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f29c1d103d29f4a14ff04b50bbbde101,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5aa5271b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c04934db0c7aba72870e308cff714f11993ca790a97d0c06f7d7008c59b61278,PodSandboxId:bfdb5d0121c2af09b54a5acc1d5766997d6a724da7565139175226d5ac1b17ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:0fbe1bf4175a8c9b7428f845038392769805f82a277f34ee0bfa3d893b7fe9f5,State:CONTAINER_RUNNING,CreatedAt:1701132529868536366,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec5bab202304e00670395448b299872,},A
nnotations:map[string]string{io.kubernetes.container.hash: 97159cab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cccd439e-d019-4e0a-af5c-eb5ea40e353d name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.260621466Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=91617ed4-ce02-45f4-98be-e708e6935c68 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.262251085Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8957d6ed3cc966e3b836721428acbefe4c3fbfda1a2b1ae44172336256a79621,Metadata:&PodSandboxMetadata{Name:kube-proxy-bv5lq,Uid:fe88f49f-5fc1-4877-a982-38fee04c9e2d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132552384146840,Labels:map[string]string{controller-revision-hash: 86685fd499,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-bv5lq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe88f49f-5fc1-4877-a982-38fee04c9e2d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T00:49:10.548698282Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f26888ed552abc282c9d006c713d0562d14b494db8301d334b8489bf4a95f81e,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-mpqdq,Uid:8cef6d4c-e932-4c97-8d87-3b4
c3777c8b8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132552261342336,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-mpqdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cef6d4c-e932-4c97-8d87-3b4c3777c8b8,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T00:49:11.924899142Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d414344bff45051179f4bf4170323625abe5d4614e702700f666b8506881565c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b8fc9309-7354-44e3-aa10-f4fb3c185f62,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132551991404401,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8fc9309-7
354-44e3-aa10-f4fb3c185f62,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-11-28T00:49:11.651344890Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:80b0964cadf5d8e8d5269d6832c774d172211b59b42184ec2db7849f7694103c,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-kbrjg,Uid
:881031bb-af46-48a7-b609-7fb1c96b2056,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132551417380606,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-kbrjg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 881031bb-af46-48a7-b609-7fb1c96b2056,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T00:49:11.047924325Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:76d3768344aa061f7c23d244d1ba4c84841c0ce92b7d044f1d08872dc4990b19,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-473615,Uid:aef9b4f333e88bdf9ad3d1f8cdb01d80,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132529362294469,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef9b4f333e88bdf9ad3d1f8cdb0
1d80,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: aef9b4f333e88bdf9ad3d1f8cdb01d80,kubernetes.io/config.seen: 2023-11-28T00:48:48.847708177Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7aba4e9fef68a95daeea9a95e45e17c576c835c50901bc834fda97389ce459f8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-473615,Uid:f29c1d103d29f4a14ff04b50bbbde101,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132529350537190,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f29c1d103d29f4a14ff04b50bbbde101,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.195:8443,kubernetes.io/config.hash: f29c1d103d29f4a14ff04b50bbbde101,kubernetes.io/config.seen: 2023-11-28T00:48:48.847705985Z,kubernetes.io/config.source: file
,},RuntimeHandler:,},&PodSandbox{Id:bfdb5d0121c2af09b54a5acc1d5766997d6a724da7565139175226d5ac1b17ce,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-473615,Uid:dec5bab202304e00670395448b299872,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132529346111707,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec5bab202304e00670395448b299872,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dec5bab202304e00670395448b299872,kubernetes.io/config.seen: 2023-11-28T00:48:48.847707057Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0df9afb0e7e3c9a070809e4f05a24ea88395c68360cd7570744433bcfaaec601,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-473615,Uid:668da7c7e0f6810eef3e399e8e6f2210,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132
529305139028,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 668da7c7e0f6810eef3e399e8e6f2210,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.195:2379,kubernetes.io/config.hash: 668da7c7e0f6810eef3e399e8e6f2210,kubernetes.io/config.seen: 2023-11-28T00:48:48.847701678Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=91617ed4-ce02-45f4-98be-e708e6935c68 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.265467249Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=142a1c4f-8775-4a1e-8736-84c0aae93cb0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.265553205Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=142a1c4f-8775-4a1e-8736-84c0aae93cb0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.265909460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9ff96d344971a04b78226e1c5a9ebc442a23e99f6bb048a60b61d47ea0af8bb,PodSandboxId:8957d6ed3cc966e3b836721428acbefe4c3fbfda1a2b1ae44172336256a79621,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:19704ecb8a22fb777f438422b7f638673596735ee0223499327597aebef1072e,State:CONTAINER_RUNNING,CreatedAt:1701132553616756809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bv5lq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe88f49f-5fc1-4877-a982-38fee04c9e2d,},Annotations:map[string]string{io.kubernetes.container.hash: c96fec65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be4894b0fbd271d45c27b0679f1e0301e8036d8f952caa75e5c0862c06fbcdf4,PodSandboxId:d414344bff45051179f4bf4170323625abe5d4614e702700f666b8506881565c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701132553290577595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8fc9309-7354-44e3-aa10-f4fb3c185f62,},Annotations:map[string]string{io.kubernetes.container.hash: 54a3e66d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55e3f9ef21a0888a801827eb2ef31026c3bd1f4cb56a7ca88168e212db62c6b,PodSandboxId:80b0964cadf5d8e8d5269d6832c774d172211b59b42184ec2db7849f7694103c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701132552667379997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kbrjg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 881031bb-af46-48a7-b609-7fb1c96b2056,},Annotations:map[string]string{io.kubernetes.container.hash: 2cea8ed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e716c8ec94f44fecf8ef86b4a1f0fff5462f7b14a046f897025b638110ac22c3,PodSandboxId:0df9afb0e7e3c9a070809e4f05a24ea88395c68360cd7570744433bcfaaec601,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701132530924570327,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 668da7c7e0f6810eef3e399e8e6f2210,},Anno
tations:map[string]string{io.kubernetes.container.hash: 14c398b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ee6a417133208323eff9b8d4fd4e62628ee8dc816843c4cd608a63b8118dc4,PodSandboxId:76d3768344aa061f7c23d244d1ba4c84841c0ce92b7d044f1d08872dc4990b19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:45ece34cbcc6c82c13e0e535245454d071df5a3b78b23eb779c1b6b9ab3602d2,State:CONTAINER_RUNNING,CreatedAt:1701132530337556561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef9b4f333e88bdf9ad3d1f8cdb01d80,},Annotations:map
[string]string{io.kubernetes.container.hash: ee29696d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6eb74031eeb360d38b73f1aee944fe0b26dc37207fe030ae8638cca11ddcfba,PodSandboxId:7aba4e9fef68a95daeea9a95e45e17c576c835c50901bc834fda97389ce459f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:8691a74e5237be5a787cea07aefa76290f24bfac5c6b7a07469172fef09305c6,State:CONTAINER_RUNNING,CreatedAt:1701132530037671548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f29c1d103d29f4a14ff04b50bbbde101,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5aa5271b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c04934db0c7aba72870e308cff714f11993ca790a97d0c06f7d7008c59b61278,PodSandboxId:bfdb5d0121c2af09b54a5acc1d5766997d6a724da7565139175226d5ac1b17ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:0fbe1bf4175a8c9b7428f845038392769805f82a277f34ee0bfa3d893b7fe9f5,State:CONTAINER_RUNNING,CreatedAt:1701132529868536366,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec5bab202304e00670395448b299872,},A
nnotations:map[string]string{io.kubernetes.container.hash: 97159cab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=142a1c4f-8775-4a1e-8736-84c0aae93cb0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.295188280Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1f855e2e-a682-40db-994b-522d7606ad0e name=/runtime.v1.RuntimeService/Version
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.295276005Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1f855e2e-a682-40db-994b-522d7606ad0e name=/runtime.v1.RuntimeService/Version
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.296512096Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a72a22b2-7d22-4ec7-b83f-deab33957f49 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.296809806Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133407296799433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=a72a22b2-7d22-4ec7-b83f-deab33957f49 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.297512390Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dfad4276-45ec-4f60-b1b2-119ef68a6d97 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.297588132Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dfad4276-45ec-4f60-b1b2-119ef68a6d97 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.297786705Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9ff96d344971a04b78226e1c5a9ebc442a23e99f6bb048a60b61d47ea0af8bb,PodSandboxId:8957d6ed3cc966e3b836721428acbefe4c3fbfda1a2b1ae44172336256a79621,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:19704ecb8a22fb777f438422b7f638673596735ee0223499327597aebef1072e,State:CONTAINER_RUNNING,CreatedAt:1701132553616756809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bv5lq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe88f49f-5fc1-4877-a982-38fee04c9e2d,},Annotations:map[string]string{io.kubernetes.container.hash: c96fec65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be4894b0fbd271d45c27b0679f1e0301e8036d8f952caa75e5c0862c06fbcdf4,PodSandboxId:d414344bff45051179f4bf4170323625abe5d4614e702700f666b8506881565c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701132553290577595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8fc9309-7354-44e3-aa10-f4fb3c185f62,},Annotations:map[string]string{io.kubernetes.container.hash: 54a3e66d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55e3f9ef21a0888a801827eb2ef31026c3bd1f4cb56a7ca88168e212db62c6b,PodSandboxId:80b0964cadf5d8e8d5269d6832c774d172211b59b42184ec2db7849f7694103c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701132552667379997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kbrjg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 881031bb-af46-48a7-b609-7fb1c96b2056,},Annotations:map[string]string{io.kubernetes.container.hash: 2cea8ed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e716c8ec94f44fecf8ef86b4a1f0fff5462f7b14a046f897025b638110ac22c3,PodSandboxId:0df9afb0e7e3c9a070809e4f05a24ea88395c68360cd7570744433bcfaaec601,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701132530924570327,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 668da7c7e0f6810eef3e399e8e6f2210,},Anno
tations:map[string]string{io.kubernetes.container.hash: 14c398b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ee6a417133208323eff9b8d4fd4e62628ee8dc816843c4cd608a63b8118dc4,PodSandboxId:76d3768344aa061f7c23d244d1ba4c84841c0ce92b7d044f1d08872dc4990b19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:45ece34cbcc6c82c13e0e535245454d071df5a3b78b23eb779c1b6b9ab3602d2,State:CONTAINER_RUNNING,CreatedAt:1701132530337556561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef9b4f333e88bdf9ad3d1f8cdb01d80,},Annotations:map
[string]string{io.kubernetes.container.hash: ee29696d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6eb74031eeb360d38b73f1aee944fe0b26dc37207fe030ae8638cca11ddcfba,PodSandboxId:7aba4e9fef68a95daeea9a95e45e17c576c835c50901bc834fda97389ce459f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:8691a74e5237be5a787cea07aefa76290f24bfac5c6b7a07469172fef09305c6,State:CONTAINER_RUNNING,CreatedAt:1701132530037671548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f29c1d103d29f4a14ff04b50bbbde101,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5aa5271b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c04934db0c7aba72870e308cff714f11993ca790a97d0c06f7d7008c59b61278,PodSandboxId:bfdb5d0121c2af09b54a5acc1d5766997d6a724da7565139175226d5ac1b17ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:0fbe1bf4175a8c9b7428f845038392769805f82a277f34ee0bfa3d893b7fe9f5,State:CONTAINER_RUNNING,CreatedAt:1701132529868536366,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec5bab202304e00670395448b299872,},A
nnotations:map[string]string{io.kubernetes.container.hash: 97159cab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dfad4276-45ec-4f60-b1b2-119ef68a6d97 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.340676512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=942705b3-057d-44b1-bc94-b7ace7e9bf65 name=/runtime.v1.RuntimeService/Version
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.340738427Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=942705b3-057d-44b1-bc94-b7ace7e9bf65 name=/runtime.v1.RuntimeService/Version
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.341985638Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=70a48c47-d577-4afa-adfd-03ea560cef0c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.342496747Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133407342480648,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=70a48c47-d577-4afa-adfd-03ea560cef0c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.343493046Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=196a7124-bc49-420d-95c6-5915e8f01fc1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.343566219Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=196a7124-bc49-420d-95c6-5915e8f01fc1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:03:27 no-preload-473615 crio[741]: time="2023-11-28 01:03:27.343757416Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d9ff96d344971a04b78226e1c5a9ebc442a23e99f6bb048a60b61d47ea0af8bb,PodSandboxId:8957d6ed3cc966e3b836721428acbefe4c3fbfda1a2b1ae44172336256a79621,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:19704ecb8a22fb777f438422b7f638673596735ee0223499327597aebef1072e,State:CONTAINER_RUNNING,CreatedAt:1701132553616756809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bv5lq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe88f49f-5fc1-4877-a982-38fee04c9e2d,},Annotations:map[string]string{io.kubernetes.container.hash: c96fec65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be4894b0fbd271d45c27b0679f1e0301e8036d8f952caa75e5c0862c06fbcdf4,PodSandboxId:d414344bff45051179f4bf4170323625abe5d4614e702700f666b8506881565c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701132553290577595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8fc9309-7354-44e3-aa10-f4fb3c185f62,},Annotations:map[string]string{io.kubernetes.container.hash: 54a3e66d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55e3f9ef21a0888a801827eb2ef31026c3bd1f4cb56a7ca88168e212db62c6b,PodSandboxId:80b0964cadf5d8e8d5269d6832c774d172211b59b42184ec2db7849f7694103c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701132552667379997,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kbrjg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 881031bb-af46-48a7-b609-7fb1c96b2056,},Annotations:map[string]string{io.kubernetes.container.hash: 2cea8ed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e716c8ec94f44fecf8ef86b4a1f0fff5462f7b14a046f897025b638110ac22c3,PodSandboxId:0df9afb0e7e3c9a070809e4f05a24ea88395c68360cd7570744433bcfaaec601,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701132530924570327,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 668da7c7e0f6810eef3e399e8e6f2210,},Anno
tations:map[string]string{io.kubernetes.container.hash: 14c398b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ee6a417133208323eff9b8d4fd4e62628ee8dc816843c4cd608a63b8118dc4,PodSandboxId:76d3768344aa061f7c23d244d1ba4c84841c0ce92b7d044f1d08872dc4990b19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:45ece34cbcc6c82c13e0e535245454d071df5a3b78b23eb779c1b6b9ab3602d2,State:CONTAINER_RUNNING,CreatedAt:1701132530337556561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aef9b4f333e88bdf9ad3d1f8cdb01d80,},Annotations:map
[string]string{io.kubernetes.container.hash: ee29696d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6eb74031eeb360d38b73f1aee944fe0b26dc37207fe030ae8638cca11ddcfba,PodSandboxId:7aba4e9fef68a95daeea9a95e45e17c576c835c50901bc834fda97389ce459f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:8691a74e5237be5a787cea07aefa76290f24bfac5c6b7a07469172fef09305c6,State:CONTAINER_RUNNING,CreatedAt:1701132530037671548,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f29c1d103d29f4a14ff04b50bbbde101,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5aa5271b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c04934db0c7aba72870e308cff714f11993ca790a97d0c06f7d7008c59b61278,PodSandboxId:bfdb5d0121c2af09b54a5acc1d5766997d6a724da7565139175226d5ac1b17ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:0fbe1bf4175a8c9b7428f845038392769805f82a277f34ee0bfa3d893b7fe9f5,State:CONTAINER_RUNNING,CreatedAt:1701132529868536366,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-473615,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec5bab202304e00670395448b299872,},A
nnotations:map[string]string{io.kubernetes.container.hash: 97159cab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=196a7124-bc49-420d-95c6-5915e8f01fc1 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d9ff96d344971       df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55   14 minutes ago      Running             kube-proxy                0                   8957d6ed3cc96       kube-proxy-bv5lq
	be4894b0fbd27       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   d414344bff450       storage-provisioner
	a55e3f9ef21a0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   80b0964cadf5d       coredns-76f75df574-kbrjg
	e716c8ec94f44       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   14 minutes ago      Running             etcd                      2                   0df9afb0e7e3c       etcd-no-preload-473615
	26ee6a4171332       4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9   14 minutes ago      Running             kube-scheduler            2                   76d3768344aa0       kube-scheduler-no-preload-473615
	b6eb74031eeb3       e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7   14 minutes ago      Running             kube-apiserver            2                   7aba4e9fef68a       kube-apiserver-no-preload-473615
	c04934db0c7ab       e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4   14 minutes ago      Running             kube-controller-manager   2                   bfdb5d0121c2a       kube-controller-manager-no-preload-473615
	
	* 
	* ==> coredns [a55e3f9ef21a0888a801827eb2ef31026c3bd1f4cb56a7ca88168e212db62c6b] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:35073 - 35380 "HINFO IN 7970207649571234781.3920572336514307717. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010882015s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-473615
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-473615
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=no-preload-473615
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T00_48_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 00:48:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-473615
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 01:03:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 00:59:29 +0000   Tue, 28 Nov 2023 00:48:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 00:59:29 +0000   Tue, 28 Nov 2023 00:48:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 00:59:29 +0000   Tue, 28 Nov 2023 00:48:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 00:59:29 +0000   Tue, 28 Nov 2023 00:48:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.195
	  Hostname:    no-preload-473615
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad5a8ba507ca41a386ef5e8d7f5846b8
	  System UUID:                ad5a8ba5-07ca-41a3-86ef-5e8d7f5846b8
	  Boot ID:                    bdb44941-15f5-4e15-8e88-1f76195dc2ba
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.0
	  Kube-Proxy Version:         v1.29.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-kbrjg                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-473615                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-473615             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-473615    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-bv5lq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-473615             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-mpqdq              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-473615 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-473615 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-473615 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             14m   kubelet          Node no-preload-473615 status is now: NodeNotReady
	  Normal  NodeReady                14m   kubelet          Node no-preload-473615 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-473615 event: Registered Node no-preload-473615 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov28 00:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070723] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.511762] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.475541] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.133405] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.473108] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.738423] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.118584] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.146182] systemd-fstab-generator[692]: Ignoring "noauto" for root device
	[  +0.118662] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[  +0.232415] systemd-fstab-generator[727]: Ignoring "noauto" for root device
	[Nov28 00:44] systemd-fstab-generator[1352]: Ignoring "noauto" for root device
	[ +19.530786] kauditd_printk_skb: 34 callbacks suppressed
	[Nov28 00:48] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.339943] systemd-fstab-generator[4120]: Ignoring "noauto" for root device
	[  +9.308883] systemd-fstab-generator[4450]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [e716c8ec94f44fecf8ef86b4a1f0fff5462f7b14a046f897025b638110ac22c3] <==
	* {"level":"info","ts":"2023-11-28T00:48:52.455088Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.195:2380"}
	{"level":"info","ts":"2023-11-28T00:48:52.455447Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.195:2380"}
	{"level":"info","ts":"2023-11-28T00:48:52.456219Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-28T00:48:52.489138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"568dd214a70d80b9 is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-28T00:48:52.489199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"568dd214a70d80b9 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-28T00:48:52.489227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"568dd214a70d80b9 received MsgPreVoteResp from 568dd214a70d80b9 at term 1"}
	{"level":"info","ts":"2023-11-28T00:48:52.489239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"568dd214a70d80b9 became candidate at term 2"}
	{"level":"info","ts":"2023-11-28T00:48:52.489245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"568dd214a70d80b9 received MsgVoteResp from 568dd214a70d80b9 at term 2"}
	{"level":"info","ts":"2023-11-28T00:48:52.489253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"568dd214a70d80b9 became leader at term 2"}
	{"level":"info","ts":"2023-11-28T00:48:52.48926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 568dd214a70d80b9 elected leader 568dd214a70d80b9 at term 2"}
	{"level":"info","ts":"2023-11-28T00:48:52.493302Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"568dd214a70d80b9","local-member-attributes":"{Name:no-preload-473615 ClientURLs:[https://192.168.61.195:2379]}","request-path":"/0/members/568dd214a70d80b9/attributes","cluster-id":"986b17048fbf010b","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-28T00:48:52.494125Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T00:48:52.49467Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T00:48:52.494847Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T00:48:52.498575Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.195:2379"}
	{"level":"info","ts":"2023-11-28T00:48:52.498693Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"986b17048fbf010b","local-member-id":"568dd214a70d80b9","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T00:48:52.498775Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T00:48:52.498811Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T00:48:52.499189Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-28T00:48:52.499232Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-28T00:48:52.50516Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-28T00:49:10.648662Z","caller":"traceutil/trace.go:171","msg":"trace[1345949121] transaction","detail":"{read_only:false; response_revision:323; number_of_response:1; }","duration":"113.257532ms","start":"2023-11-28T00:49:10.535364Z","end":"2023-11-28T00:49:10.648622Z","steps":["trace[1345949121] 'process raft request'  (duration: 76.798444ms)","trace[1345949121] 'compare'  (duration: 34.061755ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-28T00:58:52.669976Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":680}
	{"level":"info","ts":"2023-11-28T00:58:52.673696Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":680,"took":"3.288278ms","hash":1773693762}
	{"level":"info","ts":"2023-11-28T00:58:52.673779Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1773693762,"revision":680,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  01:03:27 up 20 min,  0 users,  load average: 0.23, 0.27, 0.26
	Linux no-preload-473615 5.10.57 #1 SMP Mon Nov 27 21:58:27 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b6eb74031eeb360d38b73f1aee944fe0b26dc37207fe030ae8638cca11ddcfba] <==
	* I1128 00:56:55.522134       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 00:58:54.523217       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:58:54.523581       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1128 00:58:55.524189       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:58:55.524264       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 00:58:55.524273       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 00:58:55.524388       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:58:55.524633       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 00:58:55.525929       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 00:59:55.524962       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:59:55.525129       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 00:59:55.525138       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 00:59:55.526374       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 00:59:55.526455       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 00:59:55.526462       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 01:01:55.525663       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 01:01:55.526132       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 01:01:55.526184       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 01:01:55.526890       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 01:01:55.526990       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 01:01:55.528256       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [c04934db0c7aba72870e308cff714f11993ca790a97d0c06f7d7008c59b61278] <==
	* I1128 00:57:40.325303       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:58:09.800517       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:58:10.336164       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:58:39.806723       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:58:40.344813       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:59:09.813243       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:59:10.355556       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 00:59:39.818966       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 00:59:40.364836       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:00:09.825424       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:00:10.374889       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1128 01:00:24.264405       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="204.183µs"
	I1128 01:00:35.260193       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="216.228µs"
	E1128 01:00:39.830864       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:00:40.383228       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:01:09.836887       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:01:10.392933       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:01:39.843345       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:01:40.402268       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:02:09.849849       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:02:10.412649       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:02:39.855529       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:02:40.423315       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 01:03:09.864124       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 01:03:10.432587       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [d9ff96d344971a04b78226e1c5a9ebc442a23e99f6bb048a60b61d47ea0af8bb] <==
	* I1128 00:49:13.808877       1 server_others.go:72] "Using iptables proxy"
	I1128 00:49:13.825858       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.195"]
	I1128 00:49:13.883786       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I1128 00:49:13.883873       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1128 00:49:13.883900       1 server_others.go:168] "Using iptables Proxier"
	I1128 00:49:13.887159       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1128 00:49:13.887496       1 server.go:865] "Version info" version="v1.29.0-rc.0"
	I1128 00:49:13.887548       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 00:49:13.888944       1 config.go:188] "Starting service config controller"
	I1128 00:49:13.889012       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1128 00:49:13.889206       1 config.go:97] "Starting endpoint slice config controller"
	I1128 00:49:13.889220       1 config.go:315] "Starting node config controller"
	I1128 00:49:13.889381       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1128 00:49:13.889226       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1128 00:49:13.990360       1 shared_informer.go:318] Caches are synced for service config
	I1128 00:49:13.990439       1 shared_informer.go:318] Caches are synced for node config
	I1128 00:49:13.990514       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [26ee6a417133208323eff9b8d4fd4e62628ee8dc816843c4cd608a63b8118dc4] <==
	* W1128 00:48:54.533640       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 00:48:54.533682       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1128 00:48:54.533592       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1128 00:48:54.533727       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1128 00:48:54.533745       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 00:48:54.533787       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1128 00:48:54.533988       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 00:48:54.534095       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1128 00:48:55.457212       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 00:48:55.457266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1128 00:48:55.555810       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 00:48:55.555881       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1128 00:48:55.572607       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1128 00:48:55.572715       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1128 00:48:55.709193       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 00:48:55.709265       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1128 00:48:55.735655       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1128 00:48:55.735812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1128 00:48:55.779579       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1128 00:48:55.779638       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 00:48:55.792797       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1128 00:48:55.792850       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1128 00:48:55.832668       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1128 00:48:55.832757       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1128 00:48:58.522458       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 00:43:29 UTC, ends at Tue 2023-11-28 01:03:27 UTC. --
	Nov 28 01:00:50 no-preload-473615 kubelet[4457]: E1128 01:00:50.243930    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 01:00:58 no-preload-473615 kubelet[4457]: E1128 01:00:58.319941    4457 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 01:00:58 no-preload-473615 kubelet[4457]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 01:00:58 no-preload-473615 kubelet[4457]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 01:00:58 no-preload-473615 kubelet[4457]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 01:01:02 no-preload-473615 kubelet[4457]: E1128 01:01:02.244926    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 01:01:17 no-preload-473615 kubelet[4457]: E1128 01:01:17.244086    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 01:01:30 no-preload-473615 kubelet[4457]: E1128 01:01:30.244433    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 01:01:44 no-preload-473615 kubelet[4457]: E1128 01:01:44.243387    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 01:01:58 no-preload-473615 kubelet[4457]: E1128 01:01:58.249738    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 01:01:58 no-preload-473615 kubelet[4457]: E1128 01:01:58.322182    4457 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 01:01:58 no-preload-473615 kubelet[4457]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 01:01:58 no-preload-473615 kubelet[4457]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 01:01:58 no-preload-473615 kubelet[4457]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 01:02:12 no-preload-473615 kubelet[4457]: E1128 01:02:12.243968    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 01:02:27 no-preload-473615 kubelet[4457]: E1128 01:02:27.243672    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 01:02:40 no-preload-473615 kubelet[4457]: E1128 01:02:40.243825    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 01:02:53 no-preload-473615 kubelet[4457]: E1128 01:02:53.243267    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 01:02:58 no-preload-473615 kubelet[4457]: E1128 01:02:58.322617    4457 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 01:02:58 no-preload-473615 kubelet[4457]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 01:02:58 no-preload-473615 kubelet[4457]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 01:02:58 no-preload-473615 kubelet[4457]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 01:03:04 no-preload-473615 kubelet[4457]: E1128 01:03:04.244927    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 01:03:15 no-preload-473615 kubelet[4457]: E1128 01:03:15.243782    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	Nov 28 01:03:26 no-preload-473615 kubelet[4457]: E1128 01:03:26.244557    4457 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mpqdq" podUID="8cef6d4c-e932-4c97-8d87-3b4c3777c8b8"
	
	* 
	* ==> storage-provisioner [be4894b0fbd271d45c27b0679f1e0301e8036d8f952caa75e5c0862c06fbcdf4] <==
	* I1128 00:49:13.549592       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 00:49:13.582149       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 00:49:13.582267       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1128 00:49:13.599096       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1128 00:49:13.600485       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"52391cb7-4015-4290-b5d2-dc1b45117cb2", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-473615_eb5b9af1-1d07-40be-8cb8-3846b7bbc919 became leader
	I1128 00:49:13.601326       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-473615_eb5b9af1-1d07-40be-8cb8-3846b7bbc919!
	I1128 00:49:13.702387       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-473615_eb5b9af1-1d07-40be-8cb8-3846b7bbc919!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-473615 -n no-preload-473615
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-473615 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-mpqdq
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-473615 describe pod metrics-server-57f55c9bc5-mpqdq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-473615 describe pod metrics-server-57f55c9bc5-mpqdq: exit status 1 (75.288486ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-mpqdq" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-473615 describe pod metrics-server-57f55c9bc5-mpqdq: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (309.78s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (182.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1128 01:00:27.680999   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1128 01:01:55.432506   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-732472 -n old-k8s-version-732472
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-11-28 01:03:20.595635462 +0000 UTC m=+5902.160662090
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-732472 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-732472 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.623µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-732472 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-732472 -n old-k8s-version-732472
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-732472 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-732472 logs -n 25: (1.614707896s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-options-188325                                 | cert-options-188325          | jenkins | v1.32.0 | 28 Nov 23 00:33 UTC | 28 Nov 23 00:33 UTC |
	| start   | -p no-preload-473615                                   | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:33 UTC | 28 Nov 23 00:36 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| start   | -p cert-expiration-747416                              | cert-expiration-747416       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:35 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-747416                              | cert-expiration-747416       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:35 UTC |
	| start   | -p embed-certs-304541                                  | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-732472        | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC | 28 Nov 23 00:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-732472                              | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:35 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-789586                              | stopped-upgrade-789586       | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-304541            | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-304541                                  | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-789586                              | stopped-upgrade-789586       | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-001086 | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:36 UTC |
	|         | disable-driver-mounts-001086                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:36 UTC | 28 Nov 23 00:37 UTC |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-473615             | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:37 UTC | 28 Nov 23 00:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-473615                                   | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:37 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-732472             | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-488423  | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC | 28 Nov 23 00:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-732472                              | old-k8s-version-732472       | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC | 28 Nov 23 00:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-304541                 | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-304541                                  | embed-certs-304541           | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC | 28 Nov 23 00:48 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-473615                  | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-473615                                   | no-preload-473615            | jenkins | v1.32.0 | 28 Nov 23 00:39 UTC | 28 Nov 23 00:49 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-488423       | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:40 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-488423 | jenkins | v1.32.0 | 28 Nov 23 00:40 UTC | 28 Nov 23 00:48 UTC |
	|         | default-k8s-diff-port-488423                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 00:40:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 00:40:42.238362   46126 out.go:296] Setting OutFile to fd 1 ...
	I1128 00:40:42.238498   46126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:40:42.238513   46126 out.go:309] Setting ErrFile to fd 2...
	I1128 00:40:42.238520   46126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:40:42.238712   46126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1128 00:40:42.239236   46126 out.go:303] Setting JSON to false
	I1128 00:40:42.240138   46126 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4989,"bootTime":1701127053,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 00:40:42.240194   46126 start.go:138] virtualization: kvm guest
	I1128 00:40:42.242505   46126 out.go:177] * [default-k8s-diff-port-488423] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 00:40:42.243937   46126 out.go:177]   - MINIKUBE_LOCATION=17206
	I1128 00:40:42.243990   46126 notify.go:220] Checking for updates...
	I1128 00:40:42.245317   46126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 00:40:42.246717   46126 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:40:42.248096   46126 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1128 00:40:42.249294   46126 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 00:40:42.250596   46126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 00:40:42.252296   46126 config.go:182] Loaded profile config "default-k8s-diff-port-488423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:40:42.252793   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:40:42.252854   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:40:42.267605   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45895
	I1128 00:40:42.267958   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:40:42.268457   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:40:42.268479   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:40:42.268774   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:40:42.268971   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:40:42.269215   46126 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 00:40:42.269470   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:40:42.269501   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:40:42.283984   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34957
	I1128 00:40:42.284338   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:40:42.284786   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:40:42.284808   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:40:42.285077   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:40:42.285263   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:40:42.319077   46126 out.go:177] * Using the kvm2 driver based on existing profile
	I1128 00:40:42.320321   46126 start.go:298] selected driver: kvm2
	I1128 00:40:42.320332   46126 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-488423 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-488423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.242 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:40:42.320481   46126 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 00:40:42.321242   46126 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:40:42.321325   46126 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17206-4749/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 00:40:42.335477   46126 install.go:137] /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I1128 00:40:42.335818   46126 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 00:40:42.335887   46126 cni.go:84] Creating CNI manager for ""
	I1128 00:40:42.335907   46126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:40:42.335922   46126 start_flags.go:323] config:
	{Name:default-k8s-diff-port-488423 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-488423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.242 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:40:42.336092   46126 iso.go:125] acquiring lock: {Name:mkcbf4fbddcb89ef7fa17df683cb708781ecb7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:40:42.337823   46126 out.go:177] * Starting control plane node default-k8s-diff-port-488423 in cluster default-k8s-diff-port-488423
	I1128 00:40:40.713025   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:40:42.338980   46126 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:40:42.339010   46126 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1128 00:40:42.339024   46126 cache.go:56] Caching tarball of preloaded images
	I1128 00:40:42.339105   46126 preload.go:174] Found /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 00:40:42.339117   46126 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 00:40:42.339232   46126 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/config.json ...
	I1128 00:40:42.339416   46126 start.go:365] acquiring machines lock for default-k8s-diff-port-488423: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 00:40:43.785024   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:40:49.865013   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:40:52.936964   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:40:59.017058   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:02.089017   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:08.169021   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:11.241040   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:17.321032   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:20.393000   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:26.473039   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:29.544989   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:35.625074   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:38.697020   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:44.777040   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:47.849040   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:53.929055   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:41:57.001005   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:03.081016   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:06.153078   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:12.233029   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:15.305165   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:21.385067   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:24.457038   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:30.537025   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:33.608998   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:39.689061   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:42.761012   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:48.841003   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:51.912985   45269 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.172:22: connect: no route to host
	I1128 00:42:54.916816   45580 start.go:369] acquired machines lock for "embed-certs-304541" in 3m47.030120592s
	I1128 00:42:54.916877   45580 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:42:54.916890   45580 fix.go:54] fixHost starting: 
	I1128 00:42:54.917233   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:42:54.917266   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:42:54.932296   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38887
	I1128 00:42:54.932712   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:42:54.933230   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:42:54.933254   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:42:54.933574   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:42:54.933837   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:42:54.934006   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:42:54.935712   45580 fix.go:102] recreateIfNeeded on embed-certs-304541: state=Stopped err=<nil>
	I1128 00:42:54.935737   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	W1128 00:42:54.935937   45580 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:42:54.937893   45580 out.go:177] * Restarting existing kvm2 VM for "embed-certs-304541" ...
	I1128 00:42:54.914751   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:42:54.914794   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:42:54.916666   45269 machine.go:91] provisioned docker machine in 4m37.413850055s
	I1128 00:42:54.916713   45269 fix.go:56] fixHost completed within 4m37.433506318s
	I1128 00:42:54.916719   45269 start.go:83] releasing machines lock for "old-k8s-version-732472", held for 4m37.433526985s
	W1128 00:42:54.916738   45269 start.go:691] error starting host: provision: host is not running
	W1128 00:42:54.916844   45269 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1128 00:42:54.916854   45269 start.go:706] Will try again in 5 seconds ...
	I1128 00:42:54.939120   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Start
	I1128 00:42:54.939284   45580 main.go:141] libmachine: (embed-certs-304541) Ensuring networks are active...
	I1128 00:42:54.940122   45580 main.go:141] libmachine: (embed-certs-304541) Ensuring network default is active
	I1128 00:42:54.940636   45580 main.go:141] libmachine: (embed-certs-304541) Ensuring network mk-embed-certs-304541 is active
	I1128 00:42:54.941025   45580 main.go:141] libmachine: (embed-certs-304541) Getting domain xml...
	I1128 00:42:54.941883   45580 main.go:141] libmachine: (embed-certs-304541) Creating domain...
	I1128 00:42:56.157644   45580 main.go:141] libmachine: (embed-certs-304541) Waiting to get IP...
	I1128 00:42:56.158479   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:56.158803   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:56.158888   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:56.158791   46474 retry.go:31] will retry after 235.266272ms: waiting for machine to come up
	I1128 00:42:56.395238   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:56.395630   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:56.395664   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:56.395579   46474 retry.go:31] will retry after 352.110542ms: waiting for machine to come up
	I1128 00:42:56.749150   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:56.749542   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:56.749570   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:56.749500   46474 retry.go:31] will retry after 364.122623ms: waiting for machine to come up
	I1128 00:42:57.115054   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:57.115497   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:57.115526   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:57.115450   46474 retry.go:31] will retry after 583.197763ms: waiting for machine to come up
	I1128 00:42:57.700134   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:57.700551   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:57.700577   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:57.700497   46474 retry.go:31] will retry after 515.615548ms: waiting for machine to come up
	I1128 00:42:59.917964   45269 start.go:365] acquiring machines lock for old-k8s-version-732472: {Name:mka7a548ba547848a87c7203a428a8f291ed6bb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 00:42:58.218252   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:58.218630   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:58.218668   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:58.218611   46474 retry.go:31] will retry after 690.258077ms: waiting for machine to come up
	I1128 00:42:58.910090   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:58.910438   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:58.910464   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:58.910413   46474 retry.go:31] will retry after 737.779074ms: waiting for machine to come up
	I1128 00:42:59.649308   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:42:59.649634   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:42:59.649661   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:42:59.649609   46474 retry.go:31] will retry after 1.23938471s: waiting for machine to come up
	I1128 00:43:00.890867   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:00.891318   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:00.891356   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:00.891298   46474 retry.go:31] will retry after 1.475598535s: waiting for machine to come up
	I1128 00:43:02.368630   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:02.369159   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:02.369189   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:02.369085   46474 retry.go:31] will retry after 2.323321s: waiting for machine to come up
	I1128 00:43:04.695735   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:04.696175   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:04.696208   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:04.696131   46474 retry.go:31] will retry after 1.903335453s: waiting for machine to come up
	I1128 00:43:06.601229   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:06.601657   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:06.601687   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:06.601612   46474 retry.go:31] will retry after 2.205948796s: waiting for machine to come up
	I1128 00:43:08.809792   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:08.810161   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:08.810188   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:08.810149   46474 retry.go:31] will retry after 3.31430253s: waiting for machine to come up
	I1128 00:43:12.126852   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:12.127294   45580 main.go:141] libmachine: (embed-certs-304541) DBG | unable to find current IP address of domain embed-certs-304541 in network mk-embed-certs-304541
	I1128 00:43:12.127323   45580 main.go:141] libmachine: (embed-certs-304541) DBG | I1128 00:43:12.127249   46474 retry.go:31] will retry after 3.492216742s: waiting for machine to come up
	I1128 00:43:16.981905   45815 start.go:369] acquired machines lock for "no-preload-473615" in 3m38.128436656s
	I1128 00:43:16.981988   45815 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:43:16.982000   45815 fix.go:54] fixHost starting: 
	I1128 00:43:16.982400   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:43:16.982434   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:43:17.001935   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39505
	I1128 00:43:17.002390   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:43:17.002899   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:43:17.002930   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:43:17.003303   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:43:17.003515   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:17.003658   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:43:17.005243   45815 fix.go:102] recreateIfNeeded on no-preload-473615: state=Stopped err=<nil>
	I1128 00:43:17.005273   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	W1128 00:43:17.005442   45815 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:43:17.007831   45815 out.go:177] * Restarting existing kvm2 VM for "no-preload-473615" ...
	I1128 00:43:15.620590   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.621046   45580 main.go:141] libmachine: (embed-certs-304541) Found IP for machine: 192.168.50.93
	I1128 00:43:15.621071   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has current primary IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.621083   45580 main.go:141] libmachine: (embed-certs-304541) Reserving static IP address...
	I1128 00:43:15.621440   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "embed-certs-304541", mac: "52:54:00:0a:1d:4f", ip: "192.168.50.93"} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.621473   45580 main.go:141] libmachine: (embed-certs-304541) DBG | skip adding static IP to network mk-embed-certs-304541 - found existing host DHCP lease matching {name: "embed-certs-304541", mac: "52:54:00:0a:1d:4f", ip: "192.168.50.93"}
	I1128 00:43:15.621484   45580 main.go:141] libmachine: (embed-certs-304541) Reserved static IP address: 192.168.50.93
	I1128 00:43:15.621498   45580 main.go:141] libmachine: (embed-certs-304541) Waiting for SSH to be available...
	I1128 00:43:15.621516   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Getting to WaitForSSH function...
	I1128 00:43:15.623594   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.623865   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.623897   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.623968   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Using SSH client type: external
	I1128 00:43:15.623989   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa (-rw-------)
	I1128 00:43:15.624044   45580 main.go:141] libmachine: (embed-certs-304541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.93 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:43:15.624057   45580 main.go:141] libmachine: (embed-certs-304541) DBG | About to run SSH command:
	I1128 00:43:15.624068   45580 main.go:141] libmachine: (embed-certs-304541) DBG | exit 0
	I1128 00:43:15.708868   45580 main.go:141] libmachine: (embed-certs-304541) DBG | SSH cmd err, output: <nil>: 
	I1128 00:43:15.709246   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetConfigRaw
	I1128 00:43:15.709989   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetIP
	I1128 00:43:15.712312   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.712623   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.712660   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.712968   45580 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/config.json ...
	I1128 00:43:15.713166   45580 machine.go:88] provisioning docker machine ...
	I1128 00:43:15.713183   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:15.713360   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetMachineName
	I1128 00:43:15.713552   45580 buildroot.go:166] provisioning hostname "embed-certs-304541"
	I1128 00:43:15.713573   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetMachineName
	I1128 00:43:15.713731   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:15.716027   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.716386   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.716419   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.716530   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:15.716703   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:15.716856   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:15.717034   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:15.717229   45580 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:15.717565   45580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.93 22 <nil> <nil>}
	I1128 00:43:15.717579   45580 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-304541 && echo "embed-certs-304541" | sudo tee /etc/hostname
	I1128 00:43:15.841766   45580 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-304541
	
	I1128 00:43:15.841821   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:15.844529   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.844872   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.844919   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.845037   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:15.845231   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:15.845360   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:15.845476   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:15.845616   45580 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:15.845976   45580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.93 22 <nil> <nil>}
	I1128 00:43:15.846002   45580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-304541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-304541/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-304541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:43:15.965821   45580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:43:15.965855   45580 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:43:15.965876   45580 buildroot.go:174] setting up certificates
	I1128 00:43:15.965890   45580 provision.go:83] configureAuth start
	I1128 00:43:15.965903   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetMachineName
	I1128 00:43:15.966183   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetIP
	I1128 00:43:15.968916   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.969285   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.969313   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.969483   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:15.971549   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.971913   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:15.971949   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:15.972092   45580 provision.go:138] copyHostCerts
	I1128 00:43:15.972168   45580 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:43:15.972182   45580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:43:15.972260   45580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:43:15.972415   45580 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:43:15.972427   45580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:43:15.972472   45580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:43:15.972562   45580 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:43:15.972572   45580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:43:15.972603   45580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:43:15.972663   45580 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.embed-certs-304541 san=[192.168.50.93 192.168.50.93 localhost 127.0.0.1 minikube embed-certs-304541]
	I1128 00:43:16.272269   45580 provision.go:172] copyRemoteCerts
	I1128 00:43:16.272333   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:43:16.272354   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.274793   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.275102   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.275138   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.275285   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.275495   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.275628   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.275752   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:43:16.361853   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 00:43:16.386340   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:43:16.410490   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1128 00:43:16.433471   45580 provision.go:86] duration metric: configureAuth took 467.56808ms
	I1128 00:43:16.433505   45580 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:43:16.433686   45580 config.go:182] Loaded profile config "embed-certs-304541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:43:16.433760   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.436514   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.436987   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.437029   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.437129   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.437316   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.437472   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.437614   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.437748   45580 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:16.438055   45580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.93 22 <nil> <nil>}
	I1128 00:43:16.438072   45580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:43:16.732374   45580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:43:16.732407   45580 machine.go:91] provisioned docker machine in 1.019227514s
	I1128 00:43:16.732419   45580 start.go:300] post-start starting for "embed-certs-304541" (driver="kvm2")
	I1128 00:43:16.732429   45580 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:43:16.732474   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.732847   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:43:16.732879   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.735564   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.735987   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.736027   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.736210   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.736393   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.736549   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.736714   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:43:16.824741   45580 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:43:16.829313   45580 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:43:16.829347   45580 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:43:16.829426   45580 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:43:16.829529   45580 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:43:16.829642   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:43:16.839740   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:16.862881   45580 start.go:303] post-start completed in 130.432418ms
	I1128 00:43:16.862911   45580 fix.go:56] fixHost completed within 21.946020541s
	I1128 00:43:16.862938   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.865721   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.866113   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.866144   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.866336   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.866545   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.866744   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.866869   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.867046   45580 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:16.867350   45580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.93 22 <nil> <nil>}
	I1128 00:43:16.867359   45580 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1128 00:43:16.981759   45580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701132196.930241591
	
	I1128 00:43:16.981779   45580 fix.go:206] guest clock: 1701132196.930241591
	I1128 00:43:16.981786   45580 fix.go:219] Guest: 2023-11-28 00:43:16.930241591 +0000 UTC Remote: 2023-11-28 00:43:16.862915941 +0000 UTC m=+249.133993071 (delta=67.32565ms)
	I1128 00:43:16.981804   45580 fix.go:190] guest clock delta is within tolerance: 67.32565ms
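	The fix.go lines above compare the guest clock (read over SSH with "date +%s.%N") against the host clock and accept the restart when the delta stays inside a tolerance. A minimal Go sketch of that comparison; the helper name and tolerance value here are illustrative and not taken from minikube's source:

    package main

    import (
        "fmt"
        "time"
    )

    // clockDelta returns the absolute difference between the guest clock
    // (as parsed from `date +%s.%N` output) and the host clock, and whether
    // it falls inside the given tolerance. Names and tolerance are illustrative.
    func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        d := guest.Sub(host)
        if d < 0 {
            d = -d
        }
        return d, d <= tolerance
    }

    func main() {
        host := time.Now()
        guest := host.Add(67 * time.Millisecond) // a delta comparable to the one logged above
        if d, ok := clockDelta(guest, host, time.Second); ok {
            fmt.Printf("guest clock delta is within tolerance: %v\n", d)
        }
    }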
	I1128 00:43:16.981809   45580 start.go:83] releasing machines lock for "embed-certs-304541", held for 22.064954687s
	I1128 00:43:16.981848   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.982121   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetIP
	I1128 00:43:16.984621   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.984927   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.984986   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.985171   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.985675   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.985825   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:43:16.985892   45580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:43:16.985926   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.986025   45580 ssh_runner.go:195] Run: cat /version.json
	I1128 00:43:16.986054   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:43:16.988651   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.988839   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.989079   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.989113   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.989367   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.989411   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:16.989451   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:16.989491   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:43:16.989544   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.989648   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:43:16.989692   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.989781   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:43:16.989860   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:43:16.989933   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:43:17.104567   45580 ssh_runner.go:195] Run: systemctl --version
	I1128 00:43:17.110844   45580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:43:17.254201   45580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:43:17.262078   45580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:43:17.262154   45580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:43:17.282179   45580 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:43:17.282209   45580 start.go:472] detecting cgroup driver to use...
	I1128 00:43:17.282271   45580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:43:17.296891   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:43:17.311479   45580 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:43:17.311552   45580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:43:17.325946   45580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:43:17.340513   45580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:43:17.469601   45580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:43:17.605127   45580 docker.go:219] disabling docker service ...
	I1128 00:43:17.605199   45580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:43:17.621850   45580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:43:17.634608   45580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:43:17.753009   45580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:43:17.859260   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:43:17.872564   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:43:17.889701   45580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 00:43:17.889755   45580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:17.898724   45580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:43:17.898799   45580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:17.907565   45580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:17.916243   45580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:17.925280   45580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:43:17.934933   45580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:43:17.943902   45580 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:43:17.943960   45580 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:43:17.957608   45580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:43:17.967379   45580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:43:18.074173   45580 ssh_runner.go:195] Run: sudo systemctl restart crio
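	The commands above write /etc/crictl.yaml, pin pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf with in-place sed edits, and then restart CRI-O. A short Go sketch of how such sed command strings can be assembled before being executed remotely; printing them here simply stands in for minikube's ssh_runner and is not the actual implementation:

    package main

    import "fmt"

    // crioSed builds an in-place sed command that pins key to a quoted value
    // in /etc/crio/crio.conf.d/02-crio.conf, mirroring the edits logged above.
    func crioSed(key, value string) string {
        return fmt.Sprintf(`sudo sed -i 's|^.*%s = .*$|%s = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, key, key, value)
    }

    func main() {
        cmds := []string{
            crioSed("pause_image", "registry.k8s.io/pause:3.9"),
            crioSed("cgroup_manager", "cgroupfs"),
            "sudo systemctl restart crio",
        }
        for _, c := range cmds {
            fmt.Println(c) // in the log these run on the guest via ssh_runner
        }
    }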
	I1128 00:43:18.251191   45580 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:43:18.251264   45580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:43:18.259963   45580 start.go:540] Will wait 60s for crictl version
	I1128 00:43:18.260041   45580 ssh_runner.go:195] Run: which crictl
	I1128 00:43:18.263936   45580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:43:18.303087   45580 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:43:18.303181   45580 ssh_runner.go:195] Run: crio --version
	I1128 00:43:18.344939   45580 ssh_runner.go:195] Run: crio --version
	I1128 00:43:18.402444   45580 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 00:43:17.009281   45815 main.go:141] libmachine: (no-preload-473615) Calling .Start
	I1128 00:43:17.009442   45815 main.go:141] libmachine: (no-preload-473615) Ensuring networks are active...
	I1128 00:43:17.010161   45815 main.go:141] libmachine: (no-preload-473615) Ensuring network default is active
	I1128 00:43:17.010485   45815 main.go:141] libmachine: (no-preload-473615) Ensuring network mk-no-preload-473615 is active
	I1128 00:43:17.010860   45815 main.go:141] libmachine: (no-preload-473615) Getting domain xml...
	I1128 00:43:17.011780   45815 main.go:141] libmachine: (no-preload-473615) Creating domain...
	I1128 00:43:18.289916   45815 main.go:141] libmachine: (no-preload-473615) Waiting to get IP...
	I1128 00:43:18.290892   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:18.291348   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:18.291434   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:18.291321   46604 retry.go:31] will retry after 208.579367ms: waiting for machine to come up
	I1128 00:43:18.501947   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:18.502401   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:18.502431   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:18.502362   46604 retry.go:31] will retry after 296.427399ms: waiting for machine to come up
	I1128 00:43:18.403974   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetIP
	I1128 00:43:18.406811   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:18.407171   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:43:18.407201   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:43:18.407459   45580 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1128 00:43:18.411727   45580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:18.423460   45580 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:43:18.423570   45580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:43:18.463722   45580 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 00:43:18.463797   45580 ssh_runner.go:195] Run: which lz4
	I1128 00:43:18.468008   45580 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1128 00:43:18.472523   45580 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 00:43:18.472560   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 00:43:20.378745   45580 crio.go:444] Took 1.910818 seconds to copy over tarball
	I1128 00:43:20.378836   45580 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 00:43:18.801131   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:18.801707   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:18.801741   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:18.801666   46604 retry.go:31] will retry after 355.365314ms: waiting for machine to come up
	I1128 00:43:19.159088   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:19.159590   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:19.159628   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:19.159550   46604 retry.go:31] will retry after 584.908889ms: waiting for machine to come up
	I1128 00:43:19.746379   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:19.746941   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:19.746978   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:19.746901   46604 retry.go:31] will retry after 707.432097ms: waiting for machine to come up
	I1128 00:43:20.455880   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:20.456378   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:20.456402   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:20.456346   46604 retry.go:31] will retry after 598.57984ms: waiting for machine to come up
	I1128 00:43:21.056102   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:21.056548   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:21.056579   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:21.056500   46604 retry.go:31] will retry after 742.55033ms: waiting for machine to come up
	I1128 00:43:21.800382   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:21.800895   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:21.800926   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:21.800841   46604 retry.go:31] will retry after 1.138217867s: waiting for machine to come up
	I1128 00:43:22.941401   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:22.941902   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:22.941932   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:22.941867   46604 retry.go:31] will retry after 1.552423219s: waiting for machine to come up
	I1128 00:43:23.310969   45580 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.932089296s)
	I1128 00:43:23.311004   45580 crio.go:451] Took 2.932228 seconds to extract the tarball
	I1128 00:43:23.311017   45580 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 00:43:23.351844   45580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:43:23.397599   45580 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 00:43:23.397625   45580 cache_images.go:84] Images are preloaded, skipping loading
	I1128 00:43:23.397705   45580 ssh_runner.go:195] Run: crio config
	I1128 00:43:23.460298   45580 cni.go:84] Creating CNI manager for ""
	I1128 00:43:23.460326   45580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:43:23.460348   45580 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:43:23.460383   45580 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.93 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-304541 NodeName:embed-certs-304541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.93"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.93 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:43:23.460547   45580 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.93
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-304541"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.93
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.93"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:43:23.460641   45580 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-304541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.93
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-304541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 00:43:23.460696   45580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 00:43:23.470334   45580 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:43:23.470410   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:43:23.480675   45580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1128 00:43:23.497482   45580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:43:23.513709   45580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1128 00:43:23.530363   45580 ssh_runner.go:195] Run: grep 192.168.50.93	control-plane.minikube.internal$ /etc/hosts
	I1128 00:43:23.533938   45580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.93	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:23.546399   45580 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541 for IP: 192.168.50.93
	I1128 00:43:23.546443   45580 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:43:23.546632   45580 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:43:23.546695   45580 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:43:23.546799   45580 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/client.key
	I1128 00:43:23.546892   45580 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/apiserver.key.9bda4d83
	I1128 00:43:23.546960   45580 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/proxy-client.key
	I1128 00:43:23.547122   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:43:23.547178   45580 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:43:23.547196   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:43:23.547237   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:43:23.547280   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:43:23.547317   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:43:23.547392   45580 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:23.548287   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:43:23.571910   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1128 00:43:23.597339   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:43:23.621977   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/embed-certs-304541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 00:43:23.648048   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:43:23.671213   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:43:23.695307   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:43:23.719122   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:43:23.743153   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:43:23.766469   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:43:23.789932   45580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:43:23.813950   45580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:43:23.830291   45580 ssh_runner.go:195] Run: openssl version
	I1128 00:43:23.837945   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:43:23.847572   45580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:23.852284   45580 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:23.852334   45580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:23.860003   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:43:23.872829   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:43:23.886286   45580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:43:23.892997   45580 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:43:23.893079   45580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:43:23.899923   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:43:23.909771   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:43:23.919498   45580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:43:23.924066   45580 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:43:23.924126   45580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:43:23.929583   45580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:43:23.939366   45580 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:43:23.944091   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:43:23.950255   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:43:23.956493   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:43:23.962278   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:43:23.970032   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:43:23.977660   45580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1128 00:43:23.984257   45580 kubeadm.go:404] StartCluster: {Name:embed-certs-304541 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-304541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.93 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:43:23.984408   45580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:43:23.984471   45580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:43:24.026147   45580 cri.go:89] found id: ""
	I1128 00:43:24.026222   45580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:43:24.035520   45580 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:43:24.035550   45580 kubeadm.go:636] restartCluster start
	I1128 00:43:24.035631   45580 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:43:24.044318   45580 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:24.045591   45580 kubeconfig.go:92] found "embed-certs-304541" server: "https://192.168.50.93:8443"
	I1128 00:43:24.047987   45580 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:43:24.056482   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:24.056541   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:24.067055   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:24.067072   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:24.067108   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:24.076950   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:24.577344   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:24.577441   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:24.588707   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:25.077862   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:25.077965   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:25.089729   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:25.577938   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:25.578019   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:25.593191   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:26.077819   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:26.077891   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:26.091224   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:26.577757   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:26.577844   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:26.588769   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:27.077106   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:27.077235   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:27.088668   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:27.577169   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:27.577249   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:27.588221   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:24.496599   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:24.496989   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:24.497018   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:24.496943   46604 retry.go:31] will retry after 2.05343917s: waiting for machine to come up
	I1128 00:43:26.552249   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:26.552684   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:26.552716   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:26.552636   46604 retry.go:31] will retry after 2.338063311s: waiting for machine to come up
	I1128 00:43:28.077161   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:28.077265   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:28.088552   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:28.577077   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:28.577168   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:28.588335   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:29.077927   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:29.078027   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:29.089679   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:29.577193   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:29.577293   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:29.588230   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:30.077430   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:30.077542   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:30.088547   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:30.577088   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:30.577203   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:30.588230   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:31.077809   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:31.077907   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:31.090329   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:31.577897   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:31.577975   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:31.591561   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:32.077101   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:32.077206   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:32.087945   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:32.577446   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:32.577528   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:32.588542   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:28.893450   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:28.893812   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:28.893841   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:28.893761   46604 retry.go:31] will retry after 3.578756905s: waiting for machine to come up
	I1128 00:43:32.473719   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:32.474199   45815 main.go:141] libmachine: (no-preload-473615) DBG | unable to find current IP address of domain no-preload-473615 in network mk-no-preload-473615
	I1128 00:43:32.474234   45815 main.go:141] libmachine: (no-preload-473615) DBG | I1128 00:43:32.474155   46604 retry.go:31] will retry after 3.070637163s: waiting for machine to come up
	I1128 00:43:36.805769   46126 start.go:369] acquired machines lock for "default-k8s-diff-port-488423" in 2m54.466321295s
	I1128 00:43:36.805830   46126 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:43:36.805840   46126 fix.go:54] fixHost starting: 
	I1128 00:43:36.806271   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:43:36.806311   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:43:36.825195   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32859
	I1128 00:43:36.825723   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:43:36.826325   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:43:36.826348   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:43:36.826703   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:43:36.826932   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:36.827106   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:43:36.828683   46126 fix.go:102] recreateIfNeeded on default-k8s-diff-port-488423: state=Stopped err=<nil>
	I1128 00:43:36.828709   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	W1128 00:43:36.828895   46126 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:43:36.830377   46126 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-488423" ...
	I1128 00:43:36.831614   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Start
	I1128 00:43:36.831781   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Ensuring networks are active...
	I1128 00:43:36.832447   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Ensuring network default is active
	I1128 00:43:36.832841   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Ensuring network mk-default-k8s-diff-port-488423 is active
	I1128 00:43:36.833220   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Getting domain xml...
	I1128 00:43:36.833947   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Creating domain...
	I1128 00:43:33.077031   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:33.077109   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:33.088430   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:33.578007   45580 api_server.go:166] Checking apiserver status ...
	I1128 00:43:33.578093   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:33.589185   45580 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:34.056684   45580 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:43:34.056718   45580 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:43:34.056733   45580 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:43:34.056836   45580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:43:34.096078   45580 cri.go:89] found id: ""
	I1128 00:43:34.096157   45580 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:43:34.111200   45580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:43:34.119603   45580 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:43:34.119654   45580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:43:34.128150   45580 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:43:34.128170   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:34.236389   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:34.879134   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:35.070594   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:35.159436   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:35.223694   45580 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:43:35.223787   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:35.238511   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:35.753955   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:36.254449   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:36.753943   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:37.253987   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:37.753515   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:43:37.777619   45580 api_server.go:72] duration metric: took 2.553922938s to wait for apiserver process to appear ...
	I1128 00:43:37.777646   45580 api_server.go:88] waiting for apiserver healthz status ...
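	The repeated "Checking apiserver status ..." probes above and the healthz wait that starts here are both deadline-bounded retry loops: the same check is re-run on a short interval until it succeeds or the surrounding context expires, which is what produced the earlier "context deadline exceeded" and the "needs reconfigure" decision. A generic Go sketch of that pattern; the interval, timeout, and check below are illustrative, not minikube's actual values:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // pollUntil re-runs check on every tick until it succeeds or ctx expires,
    // mirroring the "Checking apiserver status ..." loop in the log above.
    func pollUntil(ctx context.Context, interval time.Duration, check func() error) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            if err := check(); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("apiserver never became ready: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        err := pollUntil(ctx, 500*time.Millisecond, func() error {
            return errors.New("no kube-apiserver process yet") // stand-in for `pgrep kube-apiserver`
        })
        fmt.Println(err) // after the deadline this reports context.DeadlineExceeded
    }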
	I1128 00:43:35.548294   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.548718   45815 main.go:141] libmachine: (no-preload-473615) Found IP for machine: 192.168.61.195
	I1128 00:43:35.548746   45815 main.go:141] libmachine: (no-preload-473615) Reserving static IP address...
	I1128 00:43:35.548790   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has current primary IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.549194   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "no-preload-473615", mac: "52:54:00:bb:93:0d", ip: "192.168.61.195"} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.549223   45815 main.go:141] libmachine: (no-preload-473615) DBG | skip adding static IP to network mk-no-preload-473615 - found existing host DHCP lease matching {name: "no-preload-473615", mac: "52:54:00:bb:93:0d", ip: "192.168.61.195"}
	I1128 00:43:35.549238   45815 main.go:141] libmachine: (no-preload-473615) Reserved static IP address: 192.168.61.195
	I1128 00:43:35.549253   45815 main.go:141] libmachine: (no-preload-473615) Waiting for SSH to be available...
	I1128 00:43:35.549265   45815 main.go:141] libmachine: (no-preload-473615) DBG | Getting to WaitForSSH function...
	I1128 00:43:35.551246   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.551573   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.551601   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.551757   45815 main.go:141] libmachine: (no-preload-473615) DBG | Using SSH client type: external
	I1128 00:43:35.551778   45815 main.go:141] libmachine: (no-preload-473615) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa (-rw-------)
	I1128 00:43:35.551811   45815 main.go:141] libmachine: (no-preload-473615) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:43:35.551831   45815 main.go:141] libmachine: (no-preload-473615) DBG | About to run SSH command:
	I1128 00:43:35.551867   45815 main.go:141] libmachine: (no-preload-473615) DBG | exit 0
	I1128 00:43:35.636291   45815 main.go:141] libmachine: (no-preload-473615) DBG | SSH cmd err, output: <nil>: 
	I1128 00:43:35.636667   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetConfigRaw
	I1128 00:43:35.637278   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetIP
	I1128 00:43:35.639799   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.640164   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.640209   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.640423   45815 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/config.json ...
	I1128 00:43:35.640598   45815 machine.go:88] provisioning docker machine ...
	I1128 00:43:35.640632   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:35.640853   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetMachineName
	I1128 00:43:35.641071   45815 buildroot.go:166] provisioning hostname "no-preload-473615"
	I1128 00:43:35.641090   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetMachineName
	I1128 00:43:35.641242   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:35.643554   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.643845   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.643905   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.643977   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:35.644140   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:35.644279   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:35.644370   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:35.644540   45815 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:35.644971   45815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I1128 00:43:35.644986   45815 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-473615 && echo "no-preload-473615" | sudo tee /etc/hostname
	I1128 00:43:35.766635   45815 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-473615
	
	I1128 00:43:35.766689   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:35.769704   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.770068   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.770108   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.770279   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:35.770463   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:35.770622   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:35.770733   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:35.770849   45815 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:35.771214   45815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I1128 00:43:35.771235   45815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-473615' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-473615/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-473615' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:43:35.889378   45815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
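	[Editor's note] The lines above show the provisioner setting the guest hostname over SSH and patching /etc/hosts. As a minimal illustrative sketch (not part of the captured log, and not minikube's actual source), the following Go snippet builds the same kind of hostname one-liner; the function name is an assumption introduced here.

```go
package main

import "fmt"

// hostnameCmd builds the shell one-liner the log shows being run over SSH to set
// both the transient hostname and /etc/hostname. The helper name is ours, not minikube's.
func hostnameCmd(name string) string {
	return fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
}

func main() {
	fmt.Println(hostnameCmd("no-preload-473615"))
}
```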
	I1128 00:43:35.889416   45815 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:43:35.889480   45815 buildroot.go:174] setting up certificates
	I1128 00:43:35.889494   45815 provision.go:83] configureAuth start
	I1128 00:43:35.889506   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetMachineName
	I1128 00:43:35.889810   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetIP
	I1128 00:43:35.892924   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.893313   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.893359   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.893477   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:35.895759   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.896140   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:35.896169   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:35.896281   45815 provision.go:138] copyHostCerts
	I1128 00:43:35.896345   45815 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:43:35.896370   45815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:43:35.896448   45815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:43:35.896565   45815 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:43:35.896577   45815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:43:35.896618   45815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:43:35.896713   45815 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:43:35.896728   45815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:43:35.896778   45815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:43:35.896856   45815 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.no-preload-473615 san=[192.168.61.195 192.168.61.195 localhost 127.0.0.1 minikube no-preload-473615]
	I1128 00:43:36.080367   45815 provision.go:172] copyRemoteCerts
	I1128 00:43:36.080429   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:43:36.080451   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.082989   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.083327   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.083358   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.083529   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.083745   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.083927   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.084077   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:43:36.166338   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:43:36.191867   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1128 00:43:36.214184   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 00:43:36.237102   45815 provision.go:86] duration metric: configureAuth took 347.594627ms
	I1128 00:43:36.237135   45815 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:43:36.237338   45815 config.go:182] Loaded profile config "no-preload-473615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 00:43:36.237421   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.240408   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.240787   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.240826   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.240995   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.241193   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.241368   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.241539   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.241712   45815 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:36.242000   45815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I1128 00:43:36.242016   45815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:43:36.565582   45815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:43:36.565609   45815 machine.go:91] provisioned docker machine in 924.985826ms
	I1128 00:43:36.565623   45815 start.go:300] post-start starting for "no-preload-473615" (driver="kvm2")
	I1128 00:43:36.565649   45815 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:43:36.565677   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.565994   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:43:36.566025   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.568653   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.569032   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.569064   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.569148   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.569337   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.569502   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.569678   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:43:36.655695   45815 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:43:36.659909   45815 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:43:36.659941   45815 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:43:36.660020   45815 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:43:36.660108   45815 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:43:36.660228   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:43:36.669575   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:36.690970   45815 start.go:303] post-start completed in 125.33198ms
	I1128 00:43:36.690998   45815 fix.go:56] fixHost completed within 19.708998537s
	I1128 00:43:36.691022   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.693929   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.694361   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.694400   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.694665   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.694877   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.695064   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.695237   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.695404   45815 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:36.695738   45815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.195 22 <nil> <nil>}
	I1128 00:43:36.695750   45815 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:43:36.805602   45815 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701132216.779589412
	
	I1128 00:43:36.805626   45815 fix.go:206] guest clock: 1701132216.779589412
	I1128 00:43:36.805637   45815 fix.go:219] Guest: 2023-11-28 00:43:36.779589412 +0000 UTC Remote: 2023-11-28 00:43:36.691003095 +0000 UTC m=+237.986754258 (delta=88.586317ms)
	I1128 00:43:36.805673   45815 fix.go:190] guest clock delta is within tolerance: 88.586317ms
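	[Editor's note] The "guest clock" lines above compare the VM's clock against the host and accept the skew because it falls under a tolerance. A small sketch of that comparison follows, assuming a simple absolute-delta check; the tolerance value and function are placeholders, not values taken from minikube.

```go
package main

import (
	"fmt"
	"time"
)

// withinTolerance returns the absolute guest/host clock delta and whether it is
// small enough that no clock resync would be needed.
func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(88 * time.Millisecond) // a delta of the same order as the one logged
	d, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}
```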
	I1128 00:43:36.805678   45815 start.go:83] releasing machines lock for "no-preload-473615", held for 19.823720426s
	I1128 00:43:36.805705   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.805989   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetIP
	I1128 00:43:36.808864   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.809316   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.809346   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.809529   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.810162   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.810361   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:43:36.810441   45815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:43:36.810494   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.810824   45815 ssh_runner.go:195] Run: cat /version.json
	I1128 00:43:36.810845   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:43:36.813747   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.813979   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.814064   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.814102   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.814263   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.814444   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:36.814471   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:36.814508   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.814659   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:43:36.814764   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.814844   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:43:36.814913   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:43:36.815484   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:43:36.815640   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:43:36.923054   45815 ssh_runner.go:195] Run: systemctl --version
	I1128 00:43:36.930078   45815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:43:37.082251   45815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:43:37.088817   45815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:43:37.088890   45815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:43:37.110921   45815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:43:37.110950   45815 start.go:472] detecting cgroup driver to use...
	I1128 00:43:37.111017   45815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:43:37.128450   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:43:37.144814   45815 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:43:37.144875   45815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:43:37.158185   45815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:43:37.170311   45815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:43:37.287910   45815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:43:37.414142   45815 docker.go:219] disabling docker service ...
	I1128 00:43:37.414222   45815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:43:37.427085   45815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:43:37.438631   45815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:43:37.559028   45815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:43:37.676646   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:43:37.689214   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:43:37.709298   45815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 00:43:37.709370   45815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:37.718368   45815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:43:37.718446   45815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:37.727188   45815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:37.736230   45815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:37.745594   45815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:43:37.755149   45815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:43:37.763179   45815 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:43:37.763237   45815 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:43:37.780091   45815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:43:37.790861   45815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:43:37.923396   45815 ssh_runner.go:195] Run: sudo systemctl restart crio
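	[Editor's note] The preceding block configures cri-o (pause image, cgroup manager) via sed and then restarts the service. As a hedged sketch of those steps only (not the real minikube code path), this Go program prints the equivalent command sequence; the helper name and command list are assumptions based on the sed invocations visible in the log.

```go
package main

import "fmt"

const crioConf = "/etc/crio/crio.conf.d/02-crio.conf"

// crioConfigCmds mirrors the edits seen in the log: pin the pause image, switch
// the cgroup manager, then restart cri-o so the changes take effect.
func crioConfigCmds(pauseImage, cgroupManager string) []string {
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, crioConf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, crioConf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, c := range crioConfigCmds("registry.k8s.io/pause:3.9", "cgroupfs") {
		fmt.Println(c)
	}
}
```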
	I1128 00:43:38.133933   45815 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:43:38.134013   45815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:43:38.143538   45815 start.go:540] Will wait 60s for crictl version
	I1128 00:43:38.143598   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.149212   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:43:38.205988   45815 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:43:38.206079   45815 ssh_runner.go:195] Run: crio --version
	I1128 00:43:38.261211   45815 ssh_runner.go:195] Run: crio --version
	I1128 00:43:38.315398   45815 out.go:177] * Preparing Kubernetes v1.29.0-rc.0 on CRI-O 1.24.1 ...
	I1128 00:43:38.317052   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetIP
	I1128 00:43:38.320262   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:38.320708   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:43:38.320736   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:43:38.320976   45815 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1128 00:43:38.325437   45815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:38.337411   45815 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1128 00:43:38.337457   45815 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:43:38.384218   45815 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.0". assuming images are not preloaded.
	I1128 00:43:38.384245   45815 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.0 registry.k8s.io/kube-controller-manager:v1.29.0-rc.0 registry.k8s.io/kube-scheduler:v1.29.0-rc.0 registry.k8s.io/kube-proxy:v1.29.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1128 00:43:38.384325   45815 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:38.384533   45815 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.384553   45815 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1128 00:43:38.384634   45815 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.384726   45815 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.384817   45815 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.384870   45815 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.384931   45815 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.386318   45815 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:38.386368   45815 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1128 00:43:38.386381   45815 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.386373   45815 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.386324   45815 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.386316   45815 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.386319   45815 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.386326   45815 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.526945   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.527246   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1128 00:43:38.538042   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.538097   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.539522   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.549538   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.557097   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.621381   45815 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.0" does not exist at hash "4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9" in container runtime
	I1128 00:43:38.621440   45815 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.621516   45815 ssh_runner.go:195] Run: which crictl
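	[Editor's note] The "needs transfer ... does not exist at hash" lines above show the cache logic deciding that an image must be loaded from the local cache because the runtime does not report the expected image ID. A minimal sketch of that decision follows; the function name and map shape are assumptions, and only the expected kube-scheduler hash is taken from the log.

```go
package main

import "fmt"

// needsTransfer reports whether an image has to be loaded from the on-disk cache
// because the runtime either does not have it or has it under a different ID.
func needsTransfer(runtimeIDs map[string]string, ref, wantID string) bool {
	got, ok := runtimeIDs[ref]
	return !ok || got != wantID
}

func main() {
	// In the real flow the map would be filled from `sudo podman image inspect --format {{.Id}} <ref>`.
	runtimeIDs := map[string]string{}
	fmt.Println(needsTransfer(runtimeIDs,
		"registry.k8s.io/kube-scheduler:v1.29.0-rc.0",
		"4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9")) // expected hash taken from the log
}
```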
	I1128 00:43:38.208059   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting to get IP...
	I1128 00:43:38.209168   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.209599   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.209688   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:38.209572   46749 retry.go:31] will retry after 256.562292ms: waiting for machine to come up
	I1128 00:43:38.468199   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.468798   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.468828   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:38.468722   46749 retry.go:31] will retry after 287.91937ms: waiting for machine to come up
	I1128 00:43:38.758157   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.758610   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:38.758640   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:38.758555   46749 retry.go:31] will retry after 377.696379ms: waiting for machine to come up
	I1128 00:43:39.138269   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:39.138761   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:39.138795   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:39.138706   46749 retry.go:31] will retry after 476.093256ms: waiting for machine to come up
	I1128 00:43:39.616256   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:39.616611   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:39.616638   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:39.616577   46749 retry.go:31] will retry after 628.654941ms: waiting for machine to come up
	I1128 00:43:40.246993   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:40.247498   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:40.247543   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:40.247455   46749 retry.go:31] will retry after 607.981973ms: waiting for machine to come up
	I1128 00:43:40.857220   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:40.857634   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:40.857663   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:40.857592   46749 retry.go:31] will retry after 866.108704ms: waiting for machine to come up
	I1128 00:43:41.725140   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:41.725695   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:41.725716   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:41.725609   46749 retry.go:31] will retry after 1.158669064s: waiting for machine to come up
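	[Editor's note] The retry.go lines above wait for the default-k8s-diff-port VM to obtain a DHCP lease, sleeping a growing delay between attempts. The sketch below shows a comparable growing, jittered retry loop; the growth factor, jitter, and result address are placeholders, not values from minikube or this report.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// waitForIP retries lookup with a growing, jittered delay, in the spirit of the
// "will retry after ..." waits in the log while the VM obtains a DHCP lease.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errNoIP
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errNoIP
		}
		return "192.0.2.10", nil // placeholder address, not from the report
	}, 10)
	fmt.Println(ip, err)
}
```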
	I1128 00:43:37.777663   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:42.028441   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:43:42.028478   45580 api_server.go:103] status: https://192.168.50.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:43:42.028492   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:42.043818   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:43:42.043846   45580 api_server.go:103] status: https://192.168.50.93:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:43:42.544532   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:42.551469   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:43:42.551505   45580 api_server.go:103] status: https://192.168.50.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:43:43.044055   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:43.050233   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:43:43.050262   45580 api_server.go:103] status: https://192.168.50.93:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:43:43.544857   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:43:43.550155   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 200:
	ok
	I1128 00:43:43.558929   45580 api_server.go:141] control plane version: v1.28.4
	I1128 00:43:43.558962   45580 api_server.go:131] duration metric: took 5.781308354s to wait for apiserver health ...
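	[Editor's note] The preceding block polls the apiserver's /healthz endpoint, tolerating the 403 (anonymous user) and 500 (bootstrap hooks still running) responses until it finally returns 200. A self-contained sketch of such a polling loop follows, assuming a plain HTTPS GET; the interval, timeout, and TLS handling are placeholders, and only the healthz URL is taken from the log.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz until it returns 200 OK or the timeout elapses,
// printing intermediate non-200 bodies the way the report shows them.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping TLS verification keeps this sketch self-contained; a real check
		// would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.93:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```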
	I1128 00:43:43.558974   45580 cni.go:84] Creating CNI manager for ""
	I1128 00:43:43.558984   45580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:43:43.560872   45580 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:43:38.775724   45815 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1128 00:43:38.775776   45815 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.775827   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.775953   45815 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1128 00:43:38.776035   45815 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.0" does not exist at hash "e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7" in container runtime
	I1128 00:43:38.776059   45815 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.776106   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.776188   45815 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.0" does not exist at hash "e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4" in container runtime
	I1128 00:43:38.776220   45815 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.776247   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.776315   45815 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.0" does not exist at hash "df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55" in container runtime
	I1128 00:43:38.776335   45815 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.776360   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.776456   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.0
	I1128 00:43:38.776562   45815 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.776601   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:38.792457   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.0
	I1128 00:43:38.792533   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.0
	I1128 00:43:38.792584   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.0
	I1128 00:43:38.792634   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1128 00:43:38.792714   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1128 00:43:38.929517   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.0
	I1128 00:43:38.929640   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0
	I1128 00:43:38.941438   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.0
	I1128 00:43:38.941544   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0
	I1128 00:43:38.941623   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.0
	I1128 00:43:38.941704   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0
	I1128 00:43:38.964773   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1128 00:43:38.964890   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1128 00:43:38.964980   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.0
	I1128 00:43:38.965038   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0
	I1128 00:43:38.965118   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1128 00:43:38.965175   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1128 00:43:38.965250   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0 (exists)
	I1128 00:43:38.965262   45815 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0
	I1128 00:43:38.965291   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0
	I1128 00:43:38.970386   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1128 00:43:38.970443   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0 (exists)
	I1128 00:43:38.970458   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0 (exists)
	I1128 00:43:38.974722   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1128 00:43:38.974970   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0 (exists)
	I1128 00:43:39.286976   45815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:41.143462   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.0: (2.178138495s)
	I1128 00:43:41.143491   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.0 from cache
	I1128 00:43:41.143520   45815 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1128 00:43:41.143536   45815 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.856517641s)
	I1128 00:43:41.143563   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1128 00:43:41.143596   45815 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1128 00:43:41.143630   45815 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:41.143678   45815 ssh_runner.go:195] Run: which crictl
	I1128 00:43:43.335836   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.192246706s)
	I1128 00:43:43.335894   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1128 00:43:43.335859   45815 ssh_runner.go:235] Completed: which crictl: (2.192168329s)
	I1128 00:43:43.335938   45815 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0
	I1128 00:43:43.335970   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0
	I1128 00:43:43.335971   45815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:43:42.886014   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:42.886540   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:42.886564   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:42.886457   46749 retry.go:31] will retry after 1.698662705s: waiting for machine to come up
	I1128 00:43:44.586452   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:44.586892   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:44.586917   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:44.586848   46749 retry.go:31] will retry after 1.681392058s: waiting for machine to come up
	I1128 00:43:46.270022   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:46.270545   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:46.270578   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:46.270491   46749 retry.go:31] will retry after 2.061464637s: waiting for machine to come up
	I1128 00:43:43.562274   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:43:43.583729   45580 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:43:43.614704   45580 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:43:43.627543   45580 system_pods.go:59] 8 kube-system pods found
	I1128 00:43:43.627587   45580 system_pods.go:61] "coredns-5dd5756b68-crmfq" [e412b41a-a4a4-4c8c-8fe9-b96c52e5815c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:43:43.627602   45580 system_pods.go:61] "etcd-embed-certs-304541" [ceeea55a-ffbb-4c18-b563-3552f8d47f3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 00:43:43.627622   45580 system_pods.go:61] "kube-apiserver-embed-certs-304541" [e7bd6f60-fe90-4413-b906-0101ad3bda9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 00:43:43.627632   45580 system_pods.go:61] "kube-controller-manager-embed-certs-304541" [e083fd78-3aad-44ed-8bac-fc72eeded7f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 00:43:43.627652   45580 system_pods.go:61] "kube-proxy-6d4rt" [bc801fd6-e726-41d3-afcf-5ed86723dca9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 00:43:43.627665   45580 system_pods.go:61] "kube-scheduler-embed-certs-304541" [df10b58f-43ec-4492-8d95-0d91ee88fec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 00:43:43.627676   45580 system_pods.go:61] "metrics-server-57f55c9bc5-sx4m7" [1618a041-6077-4076-8178-f2692dc983b0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:43:43.627686   45580 system_pods.go:61] "storage-provisioner" [acaed13d-b10c-4fb6-b2b7-452cf928e1e5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 00:43:43.627696   45580 system_pods.go:74] duration metric: took 12.96707ms to wait for pod list to return data ...
	I1128 00:43:43.627709   45580 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:43:43.632593   45580 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:43:43.632628   45580 node_conditions.go:123] node cpu capacity is 2
	I1128 00:43:43.632642   45580 node_conditions.go:105] duration metric: took 4.924217ms to run NodePressure ...
	I1128 00:43:43.632667   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:43:43.945692   45580 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:43:43.950639   45580 kubeadm.go:787] kubelet initialised
	I1128 00:43:43.950666   45580 kubeadm.go:788] duration metric: took 4.940609ms waiting for restarted kubelet to initialise ...
	I1128 00:43:43.950677   45580 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:43:43.956229   45580 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-crmfq" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:45.975328   45580 pod_ready.go:102] pod "coredns-5dd5756b68-crmfq" in "kube-system" namespace has status "Ready":"False"
	I1128 00:43:46.036655   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.0: (2.700640635s)
	I1128 00:43:46.036696   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.0 from cache
	I1128 00:43:46.036721   45815 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0
	I1128 00:43:46.036786   45815 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.700708537s)
	I1128 00:43:46.036846   45815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1128 00:43:46.036792   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0
	I1128 00:43:46.036943   45815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1128 00:43:48.418287   45815 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.381312759s)
	I1128 00:43:48.418326   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.0: (2.381419374s)
	I1128 00:43:48.418339   45815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1128 00:43:48.418346   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.0 from cache
	I1128 00:43:48.418370   45815 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1128 00:43:48.418426   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1128 00:43:48.333973   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:48.334480   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:48.334509   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:48.334432   46749 retry.go:31] will retry after 3.421790433s: waiting for machine to come up
	I1128 00:43:51.757991   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:51.758478   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | unable to find current IP address of domain default-k8s-diff-port-488423 in network mk-default-k8s-diff-port-488423
	I1128 00:43:51.758505   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | I1128 00:43:51.758448   46749 retry.go:31] will retry after 3.726327818s: waiting for machine to come up
	I1128 00:43:48.484870   45580 pod_ready.go:92] pod "coredns-5dd5756b68-crmfq" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:48.484903   45580 pod_ready.go:81] duration metric: took 4.52864781s waiting for pod "coredns-5dd5756b68-crmfq" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:48.484916   45580 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:49.006488   45580 pod_ready.go:92] pod "etcd-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:49.006516   45580 pod_ready.go:81] duration metric: took 521.591023ms waiting for pod "etcd-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:49.006528   45580 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:49.014231   45580 pod_ready.go:92] pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:49.014258   45580 pod_ready.go:81] duration metric: took 7.721879ms waiting for pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:49.014270   45580 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:51.284611   45580 pod_ready.go:102] pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace has status "Ready":"False"
	I1128 00:43:52.636848   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.218389263s)
	I1128 00:43:52.636883   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1128 00:43:52.636912   45815 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0
	I1128 00:43:52.636964   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0
	I1128 00:43:56.745904   45269 start.go:369] acquired machines lock for "old-k8s-version-732472" in 56.827856444s
	I1128 00:43:56.745949   45269 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:43:56.745959   45269 fix.go:54] fixHost starting: 
	I1128 00:43:56.746379   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:43:56.746447   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:43:56.764386   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35269
	I1128 00:43:56.764907   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:43:56.765554   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:43:56.765584   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:43:56.766037   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:43:56.766221   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:43:56.766365   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:43:56.768054   45269 fix.go:102] recreateIfNeeded on old-k8s-version-732472: state=Stopped err=<nil>
	I1128 00:43:56.768082   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	W1128 00:43:56.768219   45269 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:43:56.771618   45269 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-732472" ...
	I1128 00:43:55.486531   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.487099   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Found IP for machine: 192.168.72.242
	I1128 00:43:55.487128   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Reserving static IP address...
	I1128 00:43:55.487158   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has current primary IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.487539   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-488423", mac: "52:54:00:4c:3b:25", ip: "192.168.72.242"} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.487574   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | skip adding static IP to network mk-default-k8s-diff-port-488423 - found existing host DHCP lease matching {name: "default-k8s-diff-port-488423", mac: "52:54:00:4c:3b:25", ip: "192.168.72.242"}
	I1128 00:43:55.487595   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Reserved static IP address: 192.168.72.242
	I1128 00:43:55.487609   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Waiting for SSH to be available...
	I1128 00:43:55.487622   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Getting to WaitForSSH function...
	I1128 00:43:55.489858   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.490219   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.490253   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.490324   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Using SSH client type: external
	I1128 00:43:55.490373   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa (-rw-------)
	I1128 00:43:55.490414   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.242 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:43:55.490431   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | About to run SSH command:
	I1128 00:43:55.490447   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | exit 0
	I1128 00:43:55.584551   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | SSH cmd err, output: <nil>: 
	I1128 00:43:55.584987   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetConfigRaw
	I1128 00:43:55.585628   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetIP
	I1128 00:43:55.588444   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.588889   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.588924   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.589207   46126 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/config.json ...
	I1128 00:43:55.589475   46126 machine.go:88] provisioning docker machine ...
	I1128 00:43:55.589501   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:55.589744   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetMachineName
	I1128 00:43:55.590007   46126 buildroot.go:166] provisioning hostname "default-k8s-diff-port-488423"
	I1128 00:43:55.590031   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetMachineName
	I1128 00:43:55.590203   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:55.592733   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.593136   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.593170   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.593313   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:55.593480   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.593628   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.593756   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:55.593918   46126 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:55.594316   46126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.242 22 <nil> <nil>}
	I1128 00:43:55.594333   46126 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-488423 && echo "default-k8s-diff-port-488423" | sudo tee /etc/hostname
	I1128 00:43:55.739338   46126 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-488423
	
	I1128 00:43:55.739368   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:55.742483   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.742870   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.742906   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.743009   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:55.743215   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.743365   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.743512   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:55.743669   46126 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:55.744119   46126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.242 22 <nil> <nil>}
	I1128 00:43:55.744140   46126 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-488423' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-488423/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-488423' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:43:55.883117   46126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:43:55.883146   46126 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:43:55.883185   46126 buildroot.go:174] setting up certificates
	I1128 00:43:55.883198   46126 provision.go:83] configureAuth start
	I1128 00:43:55.883216   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetMachineName
	I1128 00:43:55.883566   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetIP
	I1128 00:43:55.886292   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.886625   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.886652   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.886796   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:55.888873   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.889213   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.889233   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.889347   46126 provision.go:138] copyHostCerts
	I1128 00:43:55.889401   46126 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:43:55.889413   46126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:43:55.889478   46126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:43:55.889611   46126 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:43:55.889623   46126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:43:55.889650   46126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:43:55.889729   46126 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:43:55.889738   46126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:43:55.889765   46126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:43:55.889848   46126 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-488423 san=[192.168.72.242 192.168.72.242 localhost 127.0.0.1 minikube default-k8s-diff-port-488423]
	I1128 00:43:55.945434   46126 provision.go:172] copyRemoteCerts
	I1128 00:43:55.945516   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:43:55.945547   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:55.948894   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.949387   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:55.949422   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:55.949800   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:55.950023   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:55.950215   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:55.950367   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:43:56.045647   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:43:56.069972   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1128 00:43:56.093947   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 00:43:56.118840   46126 provision.go:86] duration metric: configureAuth took 235.628083ms
	I1128 00:43:56.118867   46126 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:43:56.119072   46126 config.go:182] Loaded profile config "default-k8s-diff-port-488423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:43:56.119159   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.122135   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.122514   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.122550   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.122680   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.122884   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.123076   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.123253   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.123418   46126 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:56.123729   46126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.242 22 <nil> <nil>}
	I1128 00:43:56.123746   46126 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:43:56.476330   46126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:43:56.476360   46126 machine.go:91] provisioned docker machine in 886.868182ms
	I1128 00:43:56.476384   46126 start.go:300] post-start starting for "default-k8s-diff-port-488423" (driver="kvm2")
	I1128 00:43:56.476399   46126 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:43:56.476422   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.476787   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:43:56.476824   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.479803   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.480168   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.480208   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.480342   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.480542   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.480729   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.480901   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:43:56.574040   46126 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:43:56.578163   46126 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:43:56.578186   46126 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:43:56.578247   46126 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:43:56.578339   46126 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:43:56.578455   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:43:56.586455   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:56.613452   46126 start.go:303] post-start completed in 137.050871ms
	I1128 00:43:56.613484   46126 fix.go:56] fixHost completed within 19.807643021s
	I1128 00:43:56.613510   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.616834   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.617216   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.617253   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.617478   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.617686   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.617859   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.618105   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.618302   46126 main.go:141] libmachine: Using SSH client type: native
	I1128 00:43:56.618618   46126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.242 22 <nil> <nil>}
	I1128 00:43:56.618630   46126 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:43:56.745691   46126 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701132236.690190729
	
	I1128 00:43:56.745711   46126 fix.go:206] guest clock: 1701132236.690190729
	I1128 00:43:56.745731   46126 fix.go:219] Guest: 2023-11-28 00:43:56.690190729 +0000 UTC Remote: 2023-11-28 00:43:56.613489194 +0000 UTC m=+194.421672716 (delta=76.701535ms)
	I1128 00:43:56.745784   46126 fix.go:190] guest clock delta is within tolerance: 76.701535ms
	I1128 00:43:56.745798   46126 start.go:83] releasing machines lock for "default-k8s-diff-port-488423", held for 19.939986738s
	I1128 00:43:56.745837   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.746091   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetIP
	I1128 00:43:56.749097   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.749453   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.749486   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.749648   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.750192   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.750392   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:43:56.750446   46126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:43:56.750493   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.750661   46126 ssh_runner.go:195] Run: cat /version.json
	I1128 00:43:56.750685   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:43:56.753480   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.753655   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.753948   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.753976   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.754096   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.754163   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:56.754191   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:56.754241   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.754327   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:43:56.754474   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.754489   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:43:56.754621   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:43:56.754644   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:43:56.754779   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:43:56.850794   46126 ssh_runner.go:195] Run: systemctl --version
	I1128 00:43:56.872044   46126 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:43:57.016328   46126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:43:57.022389   46126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:43:57.022463   46126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:43:57.039925   46126 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:43:57.039959   46126 start.go:472] detecting cgroup driver to use...
	I1128 00:43:57.040030   46126 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:43:57.056385   46126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:43:57.068344   46126 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:43:57.068413   46126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:43:57.081752   46126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:43:57.095169   46126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:43:57.192392   46126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:43:56.772995   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Start
	I1128 00:43:56.773150   45269 main.go:141] libmachine: (old-k8s-version-732472) Ensuring networks are active...
	I1128 00:43:56.774032   45269 main.go:141] libmachine: (old-k8s-version-732472) Ensuring network default is active
	I1128 00:43:56.774327   45269 main.go:141] libmachine: (old-k8s-version-732472) Ensuring network mk-old-k8s-version-732472 is active
	I1128 00:43:56.774732   45269 main.go:141] libmachine: (old-k8s-version-732472) Getting domain xml...
	I1128 00:43:56.775433   45269 main.go:141] libmachine: (old-k8s-version-732472) Creating domain...
	I1128 00:43:53.781169   45580 pod_ready.go:92] pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:53.781193   45580 pod_ready.go:81] duration metric: took 4.766915226s waiting for pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.781203   45580 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6d4rt" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.789370   45580 pod_ready.go:92] pod "kube-proxy-6d4rt" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:53.789400   45580 pod_ready.go:81] duration metric: took 8.189391ms waiting for pod "kube-proxy-6d4rt" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.789412   45580 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.794277   45580 pod_ready.go:92] pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:43:53.794299   45580 pod_ready.go:81] duration metric: took 4.87905ms waiting for pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:53.794307   45580 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace to be "Ready" ...
	I1128 00:43:55.984645   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:43:57.310000   46126 docker.go:219] disabling docker service ...
	I1128 00:43:57.310066   46126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:43:57.324484   46126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:43:57.339752   46126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:43:57.444051   46126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:43:57.557773   46126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:43:57.571662   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:43:57.591169   46126 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 00:43:57.591230   46126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:57.605399   46126 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:43:57.605462   46126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:57.617783   46126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:57.629258   46126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:43:57.639844   46126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:43:57.651810   46126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:43:57.663353   46126 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:43:57.663403   46126 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:43:57.679095   46126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:43:57.688096   46126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:43:57.795868   46126 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 00:43:57.970597   46126 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:43:57.970661   46126 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:43:57.975830   46126 start.go:540] Will wait 60s for crictl version
	I1128 00:43:57.975900   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:43:57.980469   46126 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:43:58.022819   46126 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:43:58.022932   46126 ssh_runner.go:195] Run: crio --version
	I1128 00:43:58.078060   46126 ssh_runner.go:195] Run: crio --version
	I1128 00:43:58.130219   46126 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 00:43:55.298307   45815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.0: (2.661319898s)
	I1128 00:43:55.298330   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.0 from cache
	I1128 00:43:55.298358   45815 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1128 00:43:55.298411   45815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1128 00:43:56.256987   45815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1128 00:43:56.257041   45815 cache_images.go:123] Successfully loaded all cached images
	I1128 00:43:56.257048   45815 cache_images.go:92] LoadImages completed in 17.872790347s
	I1128 00:43:56.257142   45815 ssh_runner.go:195] Run: crio config
	I1128 00:43:56.342206   45815 cni.go:84] Creating CNI manager for ""
	I1128 00:43:56.342230   45815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:43:56.342248   45815 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:43:56.342265   45815 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.195 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-473615 NodeName:no-preload-473615 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:43:56.342421   45815 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-473615"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:43:56.342519   45815 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-473615 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.0 ClusterName:no-preload-473615 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 00:43:56.342581   45815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.0
	I1128 00:43:56.352200   45815 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:43:56.352275   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:43:56.360863   45815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1128 00:43:56.378620   45815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1128 00:43:56.396120   45815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1128 00:43:56.415090   45815 ssh_runner.go:195] Run: grep 192.168.61.195	control-plane.minikube.internal$ /etc/hosts
	I1128 00:43:56.419072   45815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
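	The two Run lines above first grep /etc/hosts for an existing control-plane.minikube.internal entry and, when it is missing, rewrite the file by dropping any stale line for that alias and appending the new "IP<TAB>alias" pair. A minimal Go sketch of the same idea follows; ensureHostsEntry is a hypothetical helper for illustration, not minikube's implementation.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the shell one-liner above: drop any existing
	// line for the alias, append "ip<TAB>alias", and rewrite the file.
	func ensureHostsEntry(path, ip, alias string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+alias) {
				continue // stale entry, replaced below
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, alias))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.61.195", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}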
	I1128 00:43:56.434497   45815 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615 for IP: 192.168.61.195
	I1128 00:43:56.434534   45815 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:43:56.434702   45815 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:43:56.434766   45815 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:43:56.434899   45815 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.key
	I1128 00:43:56.434990   45815 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/apiserver.key.6c770a2d
	I1128 00:43:56.435043   45815 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/proxy-client.key
	I1128 00:43:56.435190   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:43:56.435231   45815 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:43:56.435249   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:43:56.435280   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:43:56.435317   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:43:56.435348   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:43:56.435402   45815 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:43:56.436170   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:43:56.464712   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 00:43:56.492394   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:43:56.517331   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1128 00:43:56.540656   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:43:56.562997   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:43:56.587574   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:43:56.614358   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:43:56.640027   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:43:56.666632   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:43:56.690925   45815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:43:56.716816   45815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:43:56.734079   45815 ssh_runner.go:195] Run: openssl version
	I1128 00:43:56.739942   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:43:56.751230   45815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:56.757607   45815 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:56.757662   45815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:43:56.764184   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:43:56.777196   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:43:56.788408   45815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:43:56.793610   45815 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:43:56.793667   45815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:43:56.799203   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:43:56.809821   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:43:56.820489   45815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:43:56.825268   45815 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:43:56.825335   45815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:43:56.830869   45815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
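	Each CA certificate copied into /usr/share/ca-certificates is also linked under /etc/ssl/certs using the subject hash printed by "openssl x509 -hash -noout" (b5213941.0, 51391683.0, 3ec20f2e.0 above), which is how OpenSSL's hashed directory lookup finds trust anchors. A hedged Go sketch of that step, shelling out to openssl; linkBySubjectHash is an illustrative helper, not minikube code.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkBySubjectHash recreates "ln -fs <cert> /etc/ssl/certs/<hash>.0":
	// ask openssl for the certificate's subject hash and point the hashed
	// name at the PEM file.
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("%s/%s.0", certsDir, hash)
		_ = os.Remove(link) // replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}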
	I1128 00:43:56.843707   45815 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:43:56.848717   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:43:56.855268   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:43:56.861889   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:43:56.867773   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:43:56.874642   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:43:56.882143   45815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
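	The "-checkend 86400" runs above make openssl exit non-zero if a certificate expires within the next 24 hours, which is what lets the restart path decide whether existing certs can be reused. The same check can be done natively with crypto/x509, as in this illustrative sketch (expiresWithin is a hypothetical helper).
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// which is what "openssl x509 -checkend 86400" tests with d = 24h.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}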
	I1128 00:43:56.889812   45815 kubeadm.go:404] StartCluster: {Name:no-preload-473615 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.0 ClusterName:no-preload-473615 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.195 Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:43:56.889969   45815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:43:56.890021   45815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:43:56.931994   45815 cri.go:89] found id: ""
	I1128 00:43:56.932061   45815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:43:56.941996   45815 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:43:56.942014   45815 kubeadm.go:636] restartCluster start
	I1128 00:43:56.942074   45815 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:43:56.950854   45815 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:56.951919   45815 kubeconfig.go:92] found "no-preload-473615" server: "https://192.168.61.195:8443"
	I1128 00:43:56.954777   45815 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:43:56.963839   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:56.963902   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:56.974803   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:56.974821   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:56.974869   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:56.989023   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:57.489949   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:57.490022   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:57.501695   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:57.989930   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:57.990014   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:58.002435   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:58.489856   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:58.489946   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:58.506641   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:58.131523   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetIP
	I1128 00:43:58.134378   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:58.134826   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:43:58.134859   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:43:58.135087   46126 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1128 00:43:58.139363   46126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:43:58.151488   46126 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 00:43:58.151552   46126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:43:58.193551   46126 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 00:43:58.193618   46126 ssh_runner.go:195] Run: which lz4
	I1128 00:43:58.197624   46126 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1128 00:43:58.201842   46126 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 00:43:58.201875   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 00:44:00.068140   46126 crio.go:444] Took 1.870561 seconds to copy over tarball
	I1128 00:44:00.068221   46126 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
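	The 46126 run above checks whether the CRI store already contains the preloaded images ("crictl images --output json"), and only when they are missing copies the preload tarball to the guest and unpacks it with "tar -I lz4 -C /var -xf". A hedged Go sketch of that check-then-extract flow, shelling out to the same tools; hasImage and extractPreload are illustrative helpers and the substring check stands in for minikube's JSON parsing.
	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
		"strings"
	)

	// hasImage runs "crictl images --output json" and does a plain substring
	// check for the image reference; enough for a sketch.
	func hasImage(ref string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		return bytes.Contains(out, []byte(ref)), nil
	}

	// extractPreload unpacks an lz4-compressed image tarball under /var,
	// mirroring "sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4".
	func extractPreload(tarball string) error {
		cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
		var stderr strings.Builder
		cmd.Stderr = &stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("tar failed: %v: %s", err, stderr.String())
		}
		return nil
	}

	func main() {
		ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
		if err == nil && !ok {
			err = extractPreload("/preloaded.tar.lz4")
		}
		if err != nil {
			fmt.Println("preload:", err)
		}
	}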
	I1128 00:43:58.122924   45269 main.go:141] libmachine: (old-k8s-version-732472) Waiting to get IP...
	I1128 00:43:58.123826   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:58.124165   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:58.124263   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:58.124146   46882 retry.go:31] will retry after 249.216665ms: waiting for machine to come up
	I1128 00:43:58.374969   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:58.375510   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:58.375537   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:58.375457   46882 retry.go:31] will retry after 317.223146ms: waiting for machine to come up
	I1128 00:43:58.694027   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:58.694483   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:58.694535   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:58.694443   46882 retry.go:31] will retry after 362.880377ms: waiting for machine to come up
	I1128 00:43:59.058976   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:59.059623   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:59.059650   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:59.059571   46882 retry.go:31] will retry after 545.497342ms: waiting for machine to come up
	I1128 00:43:59.606962   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:43:59.607607   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:43:59.607633   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:43:59.607558   46882 retry.go:31] will retry after 678.467206ms: waiting for machine to come up
	I1128 00:44:00.287531   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:00.288062   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:00.288103   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:00.288054   46882 retry.go:31] will retry after 817.7633ms: waiting for machine to come up
	I1128 00:44:01.107179   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:01.107748   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:01.107776   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:01.107690   46882 retry.go:31] will retry after 1.02533736s: waiting for machine to come up
	I1128 00:44:02.134384   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:02.134940   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:02.134972   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:02.134867   46882 retry.go:31] will retry after 1.291264059s: waiting for machine to come up
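	The libmachine DBG lines above show retry.go waiting for the old-k8s-version VM to obtain a DHCP lease, sleeping for growing, jittered intervals (249ms, 317ms, ... 1.29s) between attempts. A minimal Go sketch of that style of backoff loop, assuming a hypothetical getIP callback; it is not the retry.go implementation.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls getIP until it succeeds or maxAttempts is reached,
	// sleeping for a jittered, roughly doubling delay between attempts.
	func waitForIP(getIP func() (string, error), maxAttempts int) (string, error) {
		delay := 250 * time.Millisecond
		for i := 0; i < maxAttempts; i++ {
			if ip, err := getIP(); err == nil {
				return ip, nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return "", errors.New("machine did not obtain an IP in time")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("no lease yet")
			}
			return "192.168.61.195", nil
		}, 10)
		fmt.Println(ip, err)
	}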
	I1128 00:43:58.491595   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:00.983179   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:43:58.989453   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:58.989568   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:59.006339   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:59.489912   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:59.490007   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:43:59.505297   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:43:59.989924   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:43:59.990020   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:00.004118   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:00.489346   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:00.489421   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:00.504026   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:00.989739   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:00.989828   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:01.006279   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:01.489872   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:01.489975   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:01.504734   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:01.989185   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:01.989269   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:02.000313   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:02.489165   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:02.489246   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:02.505444   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:02.989956   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:02.990024   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:03.003038   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:03.489556   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:03.489663   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:03.502192   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:03.282407   46126 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.2141625s)
	I1128 00:44:03.282432   46126 crio.go:451] Took 3.214263 seconds to extract the tarball
	I1128 00:44:03.282440   46126 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 00:44:03.324470   46126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:44:03.375858   46126 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 00:44:03.375881   46126 cache_images.go:84] Images are preloaded, skipping loading
	I1128 00:44:03.375944   46126 ssh_runner.go:195] Run: crio config
	I1128 00:44:03.440441   46126 cni.go:84] Creating CNI manager for ""
	I1128 00:44:03.440462   46126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:03.440479   46126 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:44:03.440496   46126 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.242 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-488423 NodeName:default-k8s-diff-port-488423 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.242"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.242 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:44:03.440670   46126 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.242
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-488423"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.242
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.242"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:44:03.440746   46126 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-488423 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.242
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-488423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1128 00:44:03.440830   46126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 00:44:03.450060   46126 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:44:03.450138   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:44:03.458748   46126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1128 00:44:03.475315   46126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:44:03.492886   46126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1128 00:44:03.509665   46126 ssh_runner.go:195] Run: grep 192.168.72.242	control-plane.minikube.internal$ /etc/hosts
	I1128 00:44:03.513441   46126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.242	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:44:03.527336   46126 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423 for IP: 192.168.72.242
	I1128 00:44:03.527373   46126 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:44:03.527539   46126 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:44:03.527592   46126 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:44:03.527690   46126 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/client.key
	I1128 00:44:03.527770   46126 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/apiserver.key.05574f60
	I1128 00:44:03.527827   46126 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/proxy-client.key
	I1128 00:44:03.527966   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:44:03.528009   46126 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:44:03.528024   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:44:03.528062   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:44:03.528098   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:44:03.528133   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:44:03.528188   46126 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:44:03.528787   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:44:03.553210   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 00:44:03.578548   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:44:03.604661   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 00:44:03.627640   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:44:03.653147   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:44:03.681991   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:44:03.706068   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:44:03.730092   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:44:03.751326   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:44:03.776165   46126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:44:03.801844   46126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:44:03.819762   46126 ssh_runner.go:195] Run: openssl version
	I1128 00:44:03.826895   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:44:03.836806   46126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:44:03.842921   46126 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:44:03.842983   46126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:44:03.848802   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:44:03.859065   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:44:03.869720   46126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:03.874600   46126 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:03.874670   46126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:03.880712   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:44:03.891524   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:44:03.901286   46126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:44:03.906102   46126 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:44:03.906163   46126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:44:03.911563   46126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:44:03.921606   46126 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:44:03.926553   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:44:03.932640   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:44:03.938482   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:44:03.944483   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:44:03.950430   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:44:03.956197   46126 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1128 00:44:03.962543   46126 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-488423 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-488423 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.242 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:44:03.962647   46126 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:44:03.962700   46126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:04.014418   46126 cri.go:89] found id: ""
	I1128 00:44:04.014499   46126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:44:04.024132   46126 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:44:04.024178   46126 kubeadm.go:636] restartCluster start
	I1128 00:44:04.024239   46126 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:44:04.032856   46126 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.034010   46126 kubeconfig.go:92] found "default-k8s-diff-port-488423" server: "https://192.168.72.242:8444"
	I1128 00:44:04.036458   46126 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:44:04.044461   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.044513   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.054697   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.054714   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.054759   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.066995   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.567687   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.567784   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.579528   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:05.067882   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:05.067970   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:05.082579   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:05.568116   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:05.568240   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:05.579606   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:06.067125   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:06.067229   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:06.078637   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:06.567159   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:06.567258   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:06.578623   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:07.067770   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:07.067864   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:07.081883   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:03.427919   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:03.428413   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:03.428442   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:03.428350   46882 retry.go:31] will retry after 1.150784696s: waiting for machine to come up
	I1128 00:44:04.580519   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:04.580976   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:04.581008   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:04.580941   46882 retry.go:31] will retry after 1.981268381s: waiting for machine to come up
	I1128 00:44:06.564123   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:06.564623   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:06.564641   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:06.564596   46882 retry.go:31] will retry after 2.79895226s: waiting for machine to come up
	I1128 00:44:02.984445   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:05.483562   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:03.989899   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:03.995828   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.009197   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.489749   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.489829   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:04.501445   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:04.989934   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:04.990019   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:05.004077   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:05.489549   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:05.489634   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:05.501227   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:05.989858   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:05.989940   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:06.003151   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:06.489699   45815 api_server.go:166] Checking apiserver status ...
	I1128 00:44:06.489785   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:06.502937   45815 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:06.964667   45815 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:44:06.964705   45815 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:44:06.964720   45815 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:44:06.964808   45815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:07.008487   45815 cri.go:89] found id: ""
	I1128 00:44:07.008572   45815 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:44:07.028576   45815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:44:07.040057   45815 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:44:07.040130   45815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:07.050063   45815 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:07.050085   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:07.199305   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:08.265283   45815 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.065924411s)
	I1128 00:44:08.265324   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:08.468254   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:08.570027   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
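	Because the kubeconfig files were missing, the restart path above reconfigures the cluster by replaying individual "kubeadm init phase" steps (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml instead of running a full "kubeadm init". A hedged Go sketch driving the same phase sequence via os/exec; runInitPhases is an illustrative helper, with the binary and config paths taken from the log.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// runInitPhases replays the phase sequence from the log against a config file:
	// certs -> kubeconfig -> kubelet-start -> control-plane -> etcd.
	func runInitPhases(kubeadmBin, cfg string) error {
		phases := [][]string{
			{"init", "phase", "certs", "all", "--config", cfg},
			{"init", "phase", "kubeconfig", "all", "--config", cfg},
			{"init", "phase", "kubelet-start", "--config", cfg},
			{"init", "phase", "control-plane", "all", "--config", cfg},
			{"init", "phase", "etcd", "local", "--config", cfg},
		}
		for _, args := range phases {
			cmd := exec.Command(kubeadmBin, args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				return fmt.Errorf("kubeadm %v: %w", args, err)
			}
		}
		return nil
	}

	func main() {
		if err := runInitPhases("/var/lib/minikube/binaries/v1.29.0-rc.0/kubeadm", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}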
	I1128 00:44:08.650823   45815 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:44:08.650900   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:08.667640   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:07.567667   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:07.567751   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:07.580778   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:08.067282   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:08.067368   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:08.080618   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:08.567146   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:08.567232   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:08.580324   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:09.067606   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:09.067728   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:09.083426   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:09.567987   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:09.568084   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:09.579657   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:10.067205   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:10.067292   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:10.082466   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:10.568064   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:10.568159   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:10.583356   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:11.067987   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:11.068114   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:11.084486   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:11.567945   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:11.568057   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:11.583108   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:12.068099   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:12.068186   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:12.079172   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:09.366118   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:09.366642   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:09.366677   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:09.366580   46882 retry.go:31] will retry after 2.538437833s: waiting for machine to come up
	I1128 00:44:11.906292   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:11.906799   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | unable to find current IP address of domain old-k8s-version-732472 in network mk-old-k8s-version-732472
	I1128 00:44:11.906823   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | I1128 00:44:11.906751   46882 retry.go:31] will retry after 4.351501946s: waiting for machine to come up
	I1128 00:44:07.983966   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:09.985333   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:12.483805   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:09.182449   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:09.681686   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:10.181905   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:10.681692   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:11.181652   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:11.209900   45815 api_server.go:72] duration metric: took 2.559073582s to wait for apiserver process to appear ...
	I1128 00:44:11.209935   45815 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:44:11.209954   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:15.242230   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:15.242261   45815 api_server.go:103] status: https://192.168.61.195:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:15.242276   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:15.285509   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:15.285538   45815 api_server.go:103] status: https://192.168.61.195:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:15.786232   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:15.791529   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:15.791565   45815 api_server.go:103] status: https://192.168.61.195:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:16.285909   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:16.290996   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:16.291040   45815 api_server.go:103] status: https://192.168.61.195:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:16.786199   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:44:16.792488   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 200:
	ok
	I1128 00:44:16.805778   45815 api_server.go:141] control plane version: v1.29.0-rc.0
	I1128 00:44:16.805807   45815 api_server.go:131] duration metric: took 5.595863517s to wait for apiserver health ...
	I1128 00:44:16.805817   45815 cni.go:84] Creating CNI manager for ""
	I1128 00:44:16.805825   45815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:16.807924   45815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
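
The block above is minikube's apiserver readiness loop for the no-preload profile: it polls https://192.168.61.195:8443/healthz, treats the 403 "system:anonymous" responses and the 500 responses (poststarthooks such as rbac/bootstrap-roles not yet finished) as "keep waiting", and only proceeds once the endpoint returns 200. A minimal standalone sketch of that polling pattern follows; it is not minikube's actual implementation, the URL and timeout are placeholder values taken from this run, and TLS verification is skipped purely for illustration.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
// or the deadline passes. 403 and 500 responses during control-plane startup
// are expected and simply trigger another retry.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify only because this sketch has no cluster CA to hand.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: the control plane is serving
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.195:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

The same loop shows up again further down for the default-k8s-diff-port profile against https://192.168.72.242:8444/healthz.
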
	I1128 00:44:12.567969   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:12.568085   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:12.579496   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:13.068092   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:13.068164   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:13.079081   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:13.567677   46126 api_server.go:166] Checking apiserver status ...
	I1128 00:44:13.567773   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:13.579000   46126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:14.044782   46126 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:44:14.044818   46126 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:44:14.044832   46126 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:44:14.044927   46126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:14.090411   46126 cri.go:89] found id: ""
	I1128 00:44:14.090487   46126 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:44:14.106216   46126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:44:14.116309   46126 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:44:14.116367   46126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:14.125060   46126 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:14.125082   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:14.259194   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:14.923712   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:15.113501   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:15.221455   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:15.317171   46126 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:44:15.317269   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:15.332625   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:15.847268   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:16.347347   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:16.847441   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
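
The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are the "wait for the apiserver process to appear" step: pgrep exits with status 1 while no matching process exists, which is what the "unable to get apiserver pid" warnings record, and the loop stops retrying once a PID is printed. A rough sketch of the same check is below; in the log the command runs over SSH inside the guest VM, whereas this illustration execs it directly on the local host.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID retries pgrep until a kube-apiserver process whose
// command line mentions "minikube" exists. pgrep's non-zero exit while the
// process is absent corresponds to the "Process exited with status 1"
// warnings in the log above.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil // newest matching PID
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerPID(time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}
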
	I1128 00:44:16.259741   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.260326   45269 main.go:141] libmachine: (old-k8s-version-732472) Found IP for machine: 192.168.39.172
	I1128 00:44:16.260347   45269 main.go:141] libmachine: (old-k8s-version-732472) Reserving static IP address...
	I1128 00:44:16.260368   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has current primary IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.260940   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "old-k8s-version-732472", mac: "52:54:00:ff:2b:fd", ip: "192.168.39.172"} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.260978   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | skip adding static IP to network mk-old-k8s-version-732472 - found existing host DHCP lease matching {name: "old-k8s-version-732472", mac: "52:54:00:ff:2b:fd", ip: "192.168.39.172"}
	I1128 00:44:16.261003   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Getting to WaitForSSH function...
	I1128 00:44:16.261021   45269 main.go:141] libmachine: (old-k8s-version-732472) Reserved static IP address: 192.168.39.172
	I1128 00:44:16.261037   45269 main.go:141] libmachine: (old-k8s-version-732472) Waiting for SSH to be available...
	I1128 00:44:16.264000   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.264370   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.264402   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.264496   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Using SSH client type: external
	I1128 00:44:16.264560   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Using SSH private key: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa (-rw-------)
	I1128 00:44:16.264600   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 00:44:16.264624   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | About to run SSH command:
	I1128 00:44:16.264641   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | exit 0
	I1128 00:44:16.373651   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | SSH cmd err, output: <nil>: 
	I1128 00:44:16.374185   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetConfigRaw
	I1128 00:44:16.374992   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetIP
	I1128 00:44:16.378530   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.378958   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.378987   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.379390   45269 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/config.json ...
	I1128 00:44:16.379622   45269 machine.go:88] provisioning docker machine ...
	I1128 00:44:16.379646   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:16.379854   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetMachineName
	I1128 00:44:16.380005   45269 buildroot.go:166] provisioning hostname "old-k8s-version-732472"
	I1128 00:44:16.380024   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetMachineName
	I1128 00:44:16.380152   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:16.382908   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.383346   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.383376   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.383604   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:16.383824   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:16.384024   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:16.384179   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:16.384365   45269 main.go:141] libmachine: Using SSH client type: native
	I1128 00:44:16.384875   45269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1128 00:44:16.384902   45269 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-732472 && echo "old-k8s-version-732472" | sudo tee /etc/hostname
	I1128 00:44:16.547302   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-732472
	
	I1128 00:44:16.547378   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:16.550883   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.551409   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.551448   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.551634   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:16.551888   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:16.552113   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:16.552258   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:16.552465   45269 main.go:141] libmachine: Using SSH client type: native
	I1128 00:44:16.552965   45269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1128 00:44:16.552994   45269 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-732472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-732472/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-732472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:44:16.705539   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:44:16.705577   45269 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17206-4749/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-4749/.minikube}
	I1128 00:44:16.705601   45269 buildroot.go:174] setting up certificates
	I1128 00:44:16.705611   45269 provision.go:83] configureAuth start
	I1128 00:44:16.705622   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetMachineName
	I1128 00:44:16.705962   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetIP
	I1128 00:44:16.708726   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.709231   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.709283   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.709531   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:16.712023   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.712491   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:16.712524   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:16.712658   45269 provision.go:138] copyHostCerts
	I1128 00:44:16.712720   45269 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem, removing ...
	I1128 00:44:16.712734   45269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem
	I1128 00:44:16.712835   45269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/ca.pem (1078 bytes)
	I1128 00:44:16.712990   45269 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem, removing ...
	I1128 00:44:16.713005   45269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem
	I1128 00:44:16.713041   45269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/cert.pem (1123 bytes)
	I1128 00:44:16.713154   45269 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem, removing ...
	I1128 00:44:16.713168   45269 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem
	I1128 00:44:16.713201   45269 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-4749/.minikube/key.pem (1679 bytes)
	I1128 00:44:16.713291   45269 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-732472 san=[192.168.39.172 192.168.39.172 localhost 127.0.0.1 minikube old-k8s-version-732472]
	I1128 00:44:17.255079   45269 provision.go:172] copyRemoteCerts
	I1128 00:44:17.255157   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:44:17.255184   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:17.258078   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.258486   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:17.258522   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.258704   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:17.258892   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.259071   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:17.259278   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:44:17.360891   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1128 00:44:14.981992   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:16.984334   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:16.809569   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:44:16.837545   45815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:44:16.884377   45815 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:44:16.901252   45815 system_pods.go:59] 9 kube-system pods found
	I1128 00:44:16.901296   45815 system_pods.go:61] "coredns-76f75df574-54p94" [fc2580d3-8c03-46c8-aa43-fce9472a4bbc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:44:16.901310   45815 system_pods.go:61] "coredns-76f75df574-9ptz7" [c51a1796-37bb-411b-8477-fb4d8c7e7cb2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:44:16.901322   45815 system_pods.go:61] "etcd-no-preload-473615" [c789418f-23b1-4e84-95df-e339afc358e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 00:44:16.901337   45815 system_pods.go:61] "kube-apiserver-no-preload-473615" [204c5f02-7e14-4761-9af0-606f227dee63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 00:44:16.901351   45815 system_pods.go:61] "kube-controller-manager-no-preload-473615" [2d96a78f-b0c9-4731-a8a1-ec63459a09ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 00:44:16.901368   45815 system_pods.go:61] "kube-proxy-trr4j" [df593d3d-db4c-45f9-ad79-f35fe2cdef84] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 00:44:16.901379   45815 system_pods.go:61] "kube-scheduler-no-preload-473615" [5fe2c87b-af8b-4184-8b62-399e488dcb5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 00:44:16.901393   45815 system_pods.go:61] "metrics-server-57f55c9bc5-lh4m8" [4c3ae55b-befb-44d2-8982-592acdf3eab9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:44:16.901408   45815 system_pods.go:61] "storage-provisioner" [a3e71dd4-570e-4895-aac4-d98dfbd69a6a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 00:44:16.901423   45815 system_pods.go:74] duration metric: took 17.023663ms to wait for pod list to return data ...
	I1128 00:44:16.901434   45815 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:44:16.905738   45815 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:44:16.905766   45815 node_conditions.go:123] node cpu capacity is 2
	I1128 00:44:16.905776   45815 node_conditions.go:105] duration metric: took 4.335236ms to run NodePressure ...
	I1128 00:44:16.905791   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:17.532813   45815 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:44:17.548788   45815 kubeadm.go:787] kubelet initialised
	I1128 00:44:17.548814   45815 kubeadm.go:788] duration metric: took 15.969396ms waiting for restarted kubelet to initialise ...
	I1128 00:44:17.548824   45815 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:44:17.569590   45815 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-54p94" in "kube-system" namespace to be "Ready" ...
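
Here the no-preload profile moves from process-level checks to pod-level ones: pod_ready.go waits up to 4m0s for each system-critical pod (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) to report the Ready condition, and the recurring `"Ready":"False"` lines throughout this log are individual poll results from that wait. An equivalent wait can be reproduced with kubectl; the sketch below shells out to `kubectl wait` as an approximation (the context name is the profile name from this run, everything else is a placeholder, and this is not how minikube itself performs the check).

package main

import (
	"fmt"
	"os/exec"
)

// waitForSystemPods approximates minikube's pod_ready wait by asking kubectl
// to block until every pod matching the label reports the Ready condition.
func waitForSystemPods(context, labelSelector, timeout string) error {
	cmd := exec.Command("kubectl", "--context", context,
		"wait", "--namespace", "kube-system",
		"--for=condition=Ready", "pod",
		"--selector", labelSelector,
		"--timeout", timeout)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl wait failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	// e.g. the CoreDNS pods being waited on at this point in the log
	if err := waitForSystemPods("no-preload-473615", "k8s-app=kube-dns", "4m0s"); err != nil {
		fmt.Println(err)
	}
}
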
	I1128 00:44:17.388160   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 00:44:17.415589   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:44:17.443880   45269 provision.go:86] duration metric: configureAuth took 738.257631ms
	I1128 00:44:17.443913   45269 buildroot.go:189] setting minikube options for container-runtime
	I1128 00:44:17.444142   45269 config.go:182] Loaded profile config "old-k8s-version-732472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 00:44:17.444240   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:17.447355   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.447699   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:17.447726   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.447980   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:17.448213   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.448382   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.448542   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:17.448730   45269 main.go:141] libmachine: Using SSH client type: native
	I1128 00:44:17.449148   45269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1128 00:44:17.449173   45269 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 00:44:17.825162   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 00:44:17.825202   45269 machine.go:91] provisioned docker machine in 1.445550198s
	I1128 00:44:17.825215   45269 start.go:300] post-start starting for "old-k8s-version-732472" (driver="kvm2")
	I1128 00:44:17.825229   45269 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:44:17.825255   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:17.825631   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:44:17.825665   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:17.829047   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.829650   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:17.829813   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.829885   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:17.830108   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.830270   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:17.830427   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:44:17.933926   45269 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:44:17.939164   45269 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 00:44:17.939192   45269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/addons for local assets ...
	I1128 00:44:17.939273   45269 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-4749/.minikube/files for local assets ...
	I1128 00:44:17.939364   45269 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem -> 119302.pem in /etc/ssl/certs
	I1128 00:44:17.939481   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:44:17.950901   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:44:17.983827   45269 start.go:303] post-start completed in 158.593642ms
	I1128 00:44:17.983856   45269 fix.go:56] fixHost completed within 21.237897087s
	I1128 00:44:17.983880   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:17.988473   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.988983   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:17.989011   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:17.989353   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:17.989611   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.989755   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:17.989981   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:17.990202   45269 main.go:141] libmachine: Using SSH client type: native
	I1128 00:44:17.990729   45269 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I1128 00:44:17.990748   45269 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 00:44:18.139114   45269 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701132258.087547922
	
	I1128 00:44:18.139142   45269 fix.go:206] guest clock: 1701132258.087547922
	I1128 00:44:18.139154   45269 fix.go:219] Guest: 2023-11-28 00:44:18.087547922 +0000 UTC Remote: 2023-11-28 00:44:17.983860571 +0000 UTC m=+360.654750753 (delta=103.687351ms)
	I1128 00:44:18.139206   45269 fix.go:190] guest clock delta is within tolerance: 103.687351ms
	I1128 00:44:18.139217   45269 start.go:83] releasing machines lock for "old-k8s-version-732472", held for 21.393285553s
	I1128 00:44:18.139256   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:18.139552   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetIP
	I1128 00:44:18.142899   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.143376   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:18.143407   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.143562   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:18.144123   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:18.144308   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:44:18.144414   45269 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:44:18.144473   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:18.144586   45269 ssh_runner.go:195] Run: cat /version.json
	I1128 00:44:18.144614   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:44:18.147761   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.147994   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.148459   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:18.148542   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.148581   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:18.148605   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:18.148878   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:18.148892   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:44:18.149080   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:18.149094   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:44:18.149266   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:18.149288   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:44:18.149473   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:44:18.149488   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:44:18.271569   45269 ssh_runner.go:195] Run: systemctl --version
	I1128 00:44:18.277814   45269 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 00:44:18.432301   45269 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 00:44:18.438677   45269 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 00:44:18.438749   45269 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:44:18.455128   45269 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 00:44:18.455155   45269 start.go:472] detecting cgroup driver to use...
	I1128 00:44:18.455229   45269 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 00:44:18.472928   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 00:44:18.490329   45269 docker.go:203] disabling cri-docker service (if available) ...
	I1128 00:44:18.490409   45269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 00:44:18.505705   45269 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 00:44:18.523509   45269 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 00:44:18.696691   45269 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 00:44:18.829641   45269 docker.go:219] disabling docker service ...
	I1128 00:44:18.829775   45269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 00:44:18.847903   45269 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 00:44:18.863690   45269 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 00:44:19.002181   45269 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 00:44:19.130955   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 00:44:19.146034   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:44:19.165714   45269 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1128 00:44:19.165790   45269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:44:19.176303   45269 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 00:44:19.176368   45269 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:44:19.186698   45269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:44:19.196137   45269 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 00:44:19.205054   45269 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:44:19.215067   45269 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:44:19.224332   45269 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 00:44:19.224376   45269 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 00:44:19.238079   45269 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:44:19.246692   45269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:44:19.360913   45269 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 00:44:19.548488   45269 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 00:44:19.548563   45269 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 00:44:19.553293   45269 start.go:540] Will wait 60s for crictl version
	I1128 00:44:19.553362   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:19.557103   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:44:19.605572   45269 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 00:44:19.605662   45269 ssh_runner.go:195] Run: crio --version
	I1128 00:44:19.655808   45269 ssh_runner.go:195] Run: crio --version
	I1128 00:44:19.709415   45269 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
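
The sequence leading up to this line is the container-runtime preparation for the old-k8s-version profile: minikube stops containerd, cri-docker and docker, writes /etc/crictl.yaml to point crictl at /var/run/crio/crio.sock, rewrites pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf with sed, loads br_netfilter after the sysctl probe fails, enables IPv4 forwarding, restarts crio, and then verifies the runtime with crictl version. A condensed sketch of the sed-based config edits is below; it reuses the file path and values from the log but runs the commands directly rather than over SSH, and is purely illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// configureCrio applies the same two sed edits the log shows: set the pause
// image and switch the cgroup manager in CRI-O's drop-in config, then restart
// crio so the changes take effect. Only meaningful as root on a CRI-O host.
func configureCrio(pauseImage, cgroupManager string) error {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	edits := [][]string{
		{"sed", "-i", fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, pauseImage), conf},
		{"sed", "-i", fmt.Sprintf(`s|^.*cgroup_manager = .*$|cgroup_manager = "%s"|`, cgroupManager), conf},
	}
	for _, e := range edits {
		if out, err := exec.Command("sudo", e...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v\n%s", e, err, out)
		}
	}
	if out, err := exec.Command("sudo", "systemctl", "restart", "crio").CombinedOutput(); err != nil {
		return fmt.Errorf("restart crio: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := configureCrio("registry.k8s.io/pause:3.1", "cgroupfs"); err != nil {
		fmt.Println(err)
	}
}
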
	I1128 00:44:17.346814   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:17.847354   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:17.878161   46126 api_server.go:72] duration metric: took 2.560990106s to wait for apiserver process to appear ...
	I1128 00:44:17.878189   46126 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:44:17.878218   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:17.878696   46126 api_server.go:269] stopped: https://192.168.72.242:8444/healthz: Get "https://192.168.72.242:8444/healthz": dial tcp 192.168.72.242:8444: connect: connection refused
	I1128 00:44:17.878732   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:17.879110   46126 api_server.go:269] stopped: https://192.168.72.242:8444/healthz: Get "https://192.168.72.242:8444/healthz": dial tcp 192.168.72.242:8444: connect: connection refused
	I1128 00:44:18.379800   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:19.710653   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetIP
	I1128 00:44:19.713912   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:19.714358   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:44:19.714402   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:44:19.714586   45269 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1128 00:44:19.719516   45269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:44:19.736367   45269 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1128 00:44:19.736422   45269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:44:19.788917   45269 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1128 00:44:19.789021   45269 ssh_runner.go:195] Run: which lz4
	I1128 00:44:19.793502   45269 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 00:44:19.797933   45269 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 00:44:19.797967   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1128 00:44:21.595649   45269 crio.go:444] Took 1.802185 seconds to copy over tarball
	I1128 00:44:21.595754   45269 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
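
Because `crictl images` found no preloaded image for registry.k8s.io/kube-apiserver:v1.16.0 and /preloaded.tar.lz4 was not present on the guest, minikube copies the ~441 MB v1.16.0/cri-o preload tarball over SSH and unpacks it into /var with lz4. The check-then-extract step is roughly the following; the helper name is hypothetical, the scp of the tarball is omitted, and in the log every command runs through ssh_runner inside the VM rather than locally.

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload reproduces the final step of the preload path in the log:
// if the tarball is on the node, unpack the container image cache into /var.
func extractPreload(tarball string) error {
	if err := exec.Command("stat", tarball).Run(); err != nil {
		return fmt.Errorf("%s is not on the node yet (it would be copied over first): %w", tarball, err)
	}
	// Equivalent to: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("extracting %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
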
	I1128 00:44:19.483696   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:21.485632   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:19.612824   45815 pod_ready.go:102] pod "coredns-76f75df574-54p94" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:22.111469   45815 pod_ready.go:92] pod "coredns-76f75df574-54p94" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:22.111506   45815 pod_ready.go:81] duration metric: took 4.541884744s waiting for pod "coredns-76f75df574-54p94" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:22.111522   45815 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-9ptz7" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:22.118896   45815 pod_ready.go:92] pod "coredns-76f75df574-9ptz7" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:22.118916   45815 pod_ready.go:81] duration metric: took 7.386009ms waiting for pod "coredns-76f75df574-9ptz7" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:22.118925   45815 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:22.651574   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:22.651606   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:22.651632   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:22.731086   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:22.731124   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:22.879396   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:22.889686   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:22.889721   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:23.380219   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:23.387416   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:23.387458   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:23.880170   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:23.886215   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:44:23.886286   46126 api_server.go:103] status: https://192.168.72.242:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:44:24.380095   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:44:24.387531   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 200:
	ok
	I1128 00:44:24.411131   46126 api_server.go:141] control plane version: v1.28.4
	I1128 00:44:24.411169   46126 api_server.go:131] duration metric: took 6.532961174s to wait for apiserver health ...
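The lines above show the restart poller hitting https://192.168.72.242:8444/healthz repeatedly: first connection refused, then 403 (anonymous user), then 500 while post-start hooks finish, and finally 200 after about 6.5s. The following is a minimal, hypothetical sketch of that kind of poll loop, not minikube's api_server.go; the ~500ms retry interval is read off the timestamps above, and skipping TLS verification is an assumption made purely so the snippet is self-contained (a real client would trust the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz URL until it answers 200 or the
// deadline passes, logging non-200 bodies much like the output above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond) // matches the retry cadence in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.242:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}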
	I1128 00:44:24.411180   46126 cni.go:84] Creating CNI manager for ""
	I1128 00:44:24.411186   46126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:24.701599   46126 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:44:24.853101   46126 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:44:24.878687   46126 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
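Here the kvm2 driver plus crio runtime leads minikube to pick the bridge CNI and write a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist. The log does not show the file's contents, so the sketch below only illustrates writing a plausible minimal bridge conflist of that shape; the cniVersion, plugin list, and 10.244.0.0/16 subnet (the pod CIDR mentioned later in the log) are assumptions, not minikube's actual template.

package main

import "os"

// A hypothetical minimal bridge CNI configuration; field values are
// illustrative only and may differ from what minikube generates.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}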
	I1128 00:44:24.924669   46126 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:44:24.942030   46126 system_pods.go:59] 8 kube-system pods found
	I1128 00:44:24.942063   46126 system_pods.go:61] "coredns-5dd5756b68-n7qpb" [d027f799-6ced-488e-a4f7-6df351193c64] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:44:24.942074   46126 system_pods.go:61] "etcd-default-k8s-diff-port-488423" [55bf80da-df13-4429-962c-7fdb5ab44ea8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 00:44:24.942084   46126 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-488423" [88715645-e98e-42be-ad99-cc7711605abc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 00:44:24.942094   46126 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-488423" [07935350-12e0-4e86-8f88-7e03890aa417] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 00:44:24.942104   46126 system_pods.go:61] "kube-proxy-2sfbm" [8d92ac1f-4070-4000-9bc6-3d277e0c8c6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 00:44:24.942115   46126 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-488423" [42baed98-6b29-4f33-8bb3-df082a1b36ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 00:44:24.942134   46126 system_pods.go:61] "metrics-server-57f55c9bc5-fk9xx" [8b0d0cd6-41c5-4b67-98f9-f046e959e0e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:44:24.942152   46126 system_pods.go:61] "storage-provisioner" [f1e6e7d1-86aa-403c-b753-2b94beb7d7b1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 00:44:24.942163   46126 system_pods.go:74] duration metric: took 17.475554ms to wait for pod list to return data ...
	I1128 00:44:24.942224   46126 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:44:26.037379   46126 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:44:26.037423   46126 node_conditions.go:123] node cpu capacity is 2
	I1128 00:44:26.037450   46126 node_conditions.go:105] duration metric: took 1.095218932s to run NodePressure ...
	I1128 00:44:26.037473   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:27.084620   46126 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.047120714s)
	I1128 00:44:27.084659   46126 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:44:27.100248   46126 kubeadm.go:787] kubelet initialised
	I1128 00:44:27.100282   46126 kubeadm.go:788] duration metric: took 15.606572ms waiting for restarted kubelet to initialise ...
	I1128 00:44:27.100293   46126 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:44:27.108069   46126 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.117188   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.117221   46126 pod_ready.go:81] duration metric: took 9.127662ms waiting for pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.117238   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.117247   46126 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.123182   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.123213   46126 pod_ready.go:81] duration metric: took 5.9547ms waiting for pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.123226   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.123235   46126 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.130170   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.130196   46126 pod_ready.go:81] duration metric: took 6.952194ms waiting for pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.130209   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.130216   46126 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.136895   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.136925   46126 pod_ready.go:81] duration metric: took 6.699975ms waiting for pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.136940   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.136950   46126 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2sfbm" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:24.811723   45269 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.215918902s)
	I1128 00:44:24.811757   45269 crio.go:451] Took 3.216081 seconds to extract the tarball
	I1128 00:44:24.811769   45269 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 00:44:24.856120   45269 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 00:44:24.918138   45269 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
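The old-k8s-version run above follows the preload path: check the runtime for the expected v1.16.0 images via `crictl images --output json`, copy the 441MB preload tarball over SSH when they are missing, extract it with `tar -I lz4 -C /var`, and re-check. A rough sketch of that sequence is below, run with local exec rather than minikube's ssh_runner; the image tag and paths are taken from the log, everything else is illustrative.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// hasImage parses `crictl images --output json` and reports whether any image
// carries the given tag, mirroring the "couldn't find preloaded image" check.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var listing struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &listing); err != nil {
		return false, err
	}
	for _, img := range listing.Images {
		for _, t := range img.RepoTags {
			if strings.Contains(t, tag) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.16.0")
	if err != nil || ok {
		return
	}
	// Tarball assumed already copied to /preloaded.tar.lz4 (the scp step above).
	if err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").Run(); err != nil {
		fmt.Println("extract failed:", err)
	}
}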
	I1128 00:44:24.918185   45269 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1128 00:44:24.918257   45269 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:24.918296   45269 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1128 00:44:24.918305   45269 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1128 00:44:24.918314   45269 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:24.918297   45269 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:24.918261   45269 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:44:24.918264   45269 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:24.918585   45269 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:24.919955   45269 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:24.919959   45269 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:24.919988   45269 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1128 00:44:24.919964   45269 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:44:24.920093   45269 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:24.920302   45269 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:24.920482   45269 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:24.920497   45269 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1128 00:44:25.041095   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:25.048823   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:25.071401   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1128 00:44:25.073489   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:25.081089   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1128 00:44:25.083887   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:25.100582   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:25.150855   45269 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1128 00:44:25.150909   45269 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:25.150960   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.151148   45269 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1128 00:44:25.151198   45269 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:25.151250   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.181984   45269 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1128 00:44:25.182039   45269 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1128 00:44:25.182089   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.260634   45269 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1128 00:44:25.260687   45269 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:25.260744   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.269386   45269 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1128 00:44:25.269436   45269 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1128 00:44:25.269460   45269 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1128 00:44:25.269480   45269 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:25.269508   45269 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1128 00:44:25.269517   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.269539   45269 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:25.269552   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.269573   45269 ssh_runner.go:195] Run: which crictl
	I1128 00:44:25.269626   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1128 00:44:25.269642   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1128 00:44:25.269701   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1128 00:44:25.269733   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1128 00:44:25.368354   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1128 00:44:25.368405   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1128 00:44:25.368462   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1128 00:44:25.368474   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1128 00:44:25.368536   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1128 00:44:25.368537   45269 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1128 00:44:25.375204   45269 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1128 00:44:25.375378   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1128 00:44:25.439797   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1128 00:44:25.465699   45269 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1128 00:44:25.465731   45269 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1128 00:44:25.465788   45269 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1128 00:44:25.465795   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1128 00:44:25.465810   45269 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1128 00:44:25.797872   45269 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:44:27.031275   45269 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.233351991s)
	I1128 00:44:27.031525   45269 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.565711109s)
	I1128 00:44:27.031549   45269 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1128 00:44:27.031594   45269 cache_images.go:92] LoadImages completed in 2.113388877s
	W1128 00:44:27.031667   45269 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17206-4749/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
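Since the extracted preload still did not yield the v1.16.0 images, the run falls back to per-image cache loading: `podman image inspect` to test for presence, `crictl rmi` for stale tags, then `podman load -i` on each cached archive. Only pause_3.1 makes it in; the kube-scheduler cache file is missing, hence the warning above. A hedged sketch of that inspect-then-load pattern follows, using the pause image from the log as the example; the archive path layout is taken from the log, the helper itself is hypothetical.

package main

import (
	"fmt"
	"os/exec"
)

// loadIfMissing loads a cached image archive into the runtime only when the
// image is not already present in the local store.
func loadIfMissing(image, archive string) error {
	// A non-zero exit from inspect means the image is not present.
	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already loaded
	}
	if err := exec.Command("sudo", "podman", "load", "-i", archive).Run(); err != nil {
		return fmt.Errorf("loading %s from %s: %w", image, archive, err)
	}
	return nil
}

func main() {
	if err := loadIfMissing("registry.k8s.io/pause:3.1", "/var/lib/minikube/images/pause_3.1"); err != nil {
		fmt.Println(err)
	}
}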
	I1128 00:44:27.031754   45269 ssh_runner.go:195] Run: crio config
	I1128 00:44:27.100851   45269 cni.go:84] Creating CNI manager for ""
	I1128 00:44:27.100882   45269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:27.100901   45269 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:44:27.100924   45269 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-732472 NodeName:old-k8s-version-732472 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1128 00:44:27.101119   45269 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-732472"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-732472
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.172:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:44:27.101241   45269 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-732472 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-732472 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 00:44:27.101312   45269 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1128 00:44:27.111964   45269 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:44:27.112049   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:44:27.122796   45269 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1128 00:44:27.149768   45269 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:44:27.168520   45269 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1128 00:44:27.187296   45269 ssh_runner.go:195] Run: grep 192.168.39.172	control-plane.minikube.internal$ /etc/hosts
	I1128 00:44:27.191606   45269 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:44:27.205482   45269 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472 for IP: 192.168.39.172
	I1128 00:44:27.205521   45269 certs.go:190] acquiring lock for shared ca certs: {Name:mkb0405e4435998d8a2cfe595007b5d8f238c193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:44:27.205720   45269 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key
	I1128 00:44:27.205758   45269 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key
	I1128 00:44:27.205825   45269 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.key
	I1128 00:44:27.205885   45269 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/apiserver.key.ee96354a
	I1128 00:44:27.205931   45269 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/proxy-client.key
	I1128 00:44:27.206060   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem (1338 bytes)
	W1128 00:44:27.206115   45269 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930_empty.pem, impossibly tiny 0 bytes
	I1128 00:44:27.206130   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:44:27.206176   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:44:27.206214   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:44:27.206251   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/certs/home/jenkins/minikube-integration/17206-4749/.minikube/certs/key.pem (1679 bytes)
	I1128 00:44:27.206313   45269 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem (1708 bytes)
	I1128 00:44:27.207009   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:44:27.233932   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 00:44:27.258138   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:44:27.282203   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 00:44:27.309304   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:44:27.335945   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:44:27.360118   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:44:23.984808   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:26.118398   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:27.491683   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "kube-proxy-2sfbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.491724   46126 pod_ready.go:81] duration metric: took 354.756767ms waiting for pod "kube-proxy-2sfbm" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.491736   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "kube-proxy-2sfbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.491745   46126 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:27.890269   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.890299   46126 pod_ready.go:81] duration metric: took 398.544263ms waiting for pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:27.890316   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.890324   46126 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:28.289016   46126 pod_ready.go:97] node "default-k8s-diff-port-488423" hosting pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:28.289043   46126 pod_ready.go:81] duration metric: took 398.709637ms waiting for pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace to be "Ready" ...
	E1128 00:44:28.289055   46126 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-488423" hosting pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:28.289062   46126 pod_ready.go:38] duration metric: took 1.188759196s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
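The pod_ready lines above repeatedly fetch each system-critical pod and skip it as long as the hosting node still reports Ready=False. For illustration only, the sketch below checks the same Ready condition on a single pod with client-go, assuming a kubeconfig at the default location; minikube's own helper in pod_ready.go differs in detail (retry loops, node checks, label selection).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady fetches a pod and reports whether its Ready condition is True,
// the predicate the waits above are polling for.
func podIsReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := podIsReady(cs, "kube-system", "coredns-5dd5756b68-n7qpb")
	fmt.Println(ready, err)
}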
	I1128 00:44:28.289084   46126 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:44:28.301648   46126 ops.go:34] apiserver oom_adj: -16
	I1128 00:44:28.301676   46126 kubeadm.go:640] restartCluster took 24.277487612s
	I1128 00:44:28.301683   46126 kubeadm.go:406] StartCluster complete in 24.339149368s
	I1128 00:44:28.301697   46126 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:44:28.301770   46126 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:44:28.303560   46126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:44:28.303802   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:44:28.303915   46126 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:44:28.303994   46126 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-488423"
	I1128 00:44:28.304023   46126 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-488423"
	W1128 00:44:28.304038   46126 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:44:28.304040   46126 config.go:182] Loaded profile config "default-k8s-diff-port-488423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:44:28.304063   46126 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-488423"
	I1128 00:44:28.304117   46126 host.go:66] Checking if "default-k8s-diff-port-488423" exists ...
	I1128 00:44:28.304118   46126 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-488423"
	W1128 00:44:28.304134   46126 addons.go:240] addon metrics-server should already be in state true
	I1128 00:44:28.304220   46126 host.go:66] Checking if "default-k8s-diff-port-488423" exists ...
	I1128 00:44:28.304547   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.304589   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.304669   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.304741   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.304928   46126 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-488423"
	I1128 00:44:28.304956   46126 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-488423"
	I1128 00:44:28.305388   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.305437   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.310450   46126 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-488423" context rescaled to 1 replicas
	I1128 00:44:28.310496   46126 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.242 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:44:28.312602   46126 out.go:177] * Verifying Kubernetes components...
	I1128 00:44:28.314027   46126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:44:28.321407   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40955
	I1128 00:44:28.321423   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41137
	I1128 00:44:28.322247   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.322287   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.322797   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.322820   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.322942   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.322968   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.323210   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.323242   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35475
	I1128 00:44:28.323323   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.323556   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.323775   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.323818   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.323857   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.323891   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.323937   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.323957   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.324293   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.324471   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:44:28.327954   46126 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-488423"
	W1128 00:44:28.327972   46126 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:44:28.327993   46126 host.go:66] Checking if "default-k8s-diff-port-488423" exists ...
	I1128 00:44:28.328327   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.328355   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.342376   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40729
	I1128 00:44:28.342789   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.343325   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.343366   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.343751   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.343978   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38927
	I1128 00:44:28.343995   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:44:28.344392   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.344983   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.345009   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.345366   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.345910   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:44:28.348242   46126 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:44:28.346449   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39125
	I1128 00:44:28.350126   46126 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:44:28.350147   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:44:28.350166   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:44:28.346666   46126 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:44:28.350250   46126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:44:28.348589   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.350911   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.350930   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.351442   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.351817   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:44:28.353691   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:44:28.353876   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.355460   46126 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 00:44:24.141365   45815 pod_ready.go:102] pod "etcd-no-preload-473615" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:26.518655   45815 pod_ready.go:102] pod "etcd-no-preload-473615" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:26.887843   45815 pod_ready.go:92] pod "etcd-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:26.887877   45815 pod_ready.go:81] duration metric: took 4.768943982s waiting for pod "etcd-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:26.887891   45815 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:26.909504   45815 pod_ready.go:92] pod "kube-apiserver-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:26.909600   45815 pod_ready.go:81] duration metric: took 21.699474ms waiting for pod "kube-apiserver-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:26.909627   45815 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:28.354335   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:44:28.354504   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:44:28.357068   46126 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 00:44:28.357088   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 00:44:28.357094   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.357109   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:44:28.357228   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:44:28.357356   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:44:28.357475   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:44:28.360015   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.360725   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:44:28.360785   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.360994   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:44:28.361177   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:44:28.361341   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:44:28.361503   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:44:28.368150   46126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40591
	I1128 00:44:28.368511   46126 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:44:28.369005   46126 main.go:141] libmachine: Using API Version  1
	I1128 00:44:28.369023   46126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:44:28.369326   46126 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:44:28.369481   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetState
	I1128 00:44:28.370807   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .DriverName
	I1128 00:44:28.371066   46126 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:44:28.371078   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:44:28.371092   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHHostname
	I1128 00:44:28.373819   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.374409   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHPort
	I1128 00:44:28.374510   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:3b:25", ip: ""} in network mk-default-k8s-diff-port-488423: {Iface:virbr3 ExpiryTime:2023-11-28 01:37:12 +0000 UTC Type:0 Mac:52:54:00:4c:3b:25 Iaid: IPaddr:192.168.72.242 Prefix:24 Hostname:default-k8s-diff-port-488423 Clientid:01:52:54:00:4c:3b:25}
	I1128 00:44:28.374541   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | domain default-k8s-diff-port-488423 has defined IP address 192.168.72.242 and MAC address 52:54:00:4c:3b:25 in network mk-default-k8s-diff-port-488423
	I1128 00:44:28.374602   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHKeyPath
	I1128 00:44:28.374688   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .GetSSHUsername
	I1128 00:44:28.374768   46126 sshutil.go:53] new ssh client: &{IP:192.168.72.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/default-k8s-diff-port-488423/id_rsa Username:docker}
	I1128 00:44:28.474380   46126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:44:28.505183   46126 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 00:44:28.505206   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 00:44:28.536550   46126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:44:28.584832   46126 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 00:44:28.584857   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 00:44:28.626477   46126 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1128 00:44:28.626473   46126 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-488423" to be "Ready" ...
	I1128 00:44:28.644406   46126 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:44:28.644436   46126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 00:44:28.671872   46126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:44:29.867337   46126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.330746736s)
	I1128 00:44:29.867437   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.867451   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.867490   46126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.393076585s)
	I1128 00:44:29.867532   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.867553   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.867827   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.867841   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.867850   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.867858   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.867988   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.868006   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.868029   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.868038   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.868129   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Closing plugin on server side
	I1128 00:44:29.868145   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.868159   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.868381   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.868400   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.868429   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Closing plugin on server side
	I1128 00:44:29.876482   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.876505   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.876724   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.876736   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.885484   46126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.213575767s)
	I1128 00:44:29.885534   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.885551   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.885841   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.885862   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.885873   46126 main.go:141] libmachine: Making call to close driver server
	I1128 00:44:29.885883   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) Calling .Close
	I1128 00:44:29.885887   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Closing plugin on server side
	I1128 00:44:29.886153   46126 main.go:141] libmachine: (default-k8s-diff-port-488423) DBG | Closing plugin on server side
	I1128 00:44:29.886164   46126 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:44:29.886194   46126 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:44:29.886211   46126 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-488423"
	I1128 00:44:29.889173   46126 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1128 00:44:29.890607   46126 addons.go:502] enable addons completed in 1.586699021s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1128 00:44:30.716680   46126 node_ready.go:58] node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:27.385529   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:44:27.411354   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/certs/11930.pem --> /usr/share/ca-certificates/11930.pem (1338 bytes)
	I1128 00:44:27.439142   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/ssl/certs/119302.pem --> /usr/share/ca-certificates/119302.pem (1708 bytes)
	I1128 00:44:27.466763   45269 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:44:27.497738   45269 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:44:27.518132   45269 ssh_runner.go:195] Run: openssl version
	I1128 00:44:27.524720   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11930.pem && ln -fs /usr/share/ca-certificates/11930.pem /etc/ssl/certs/11930.pem"
	I1128 00:44:27.537673   45269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11930.pem
	I1128 00:44:27.542561   45269 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:37 /usr/share/ca-certificates/11930.pem
	I1128 00:44:27.542623   45269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11930.pem
	I1128 00:44:27.548137   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11930.pem /etc/ssl/certs/51391683.0"
	I1128 00:44:27.558112   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/119302.pem && ln -fs /usr/share/ca-certificates/119302.pem /etc/ssl/certs/119302.pem"
	I1128 00:44:27.568318   45269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/119302.pem
	I1128 00:44:27.573638   45269 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:37 /usr/share/ca-certificates/119302.pem
	I1128 00:44:27.573697   45269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/119302.pem
	I1128 00:44:27.579739   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/119302.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:44:27.589908   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:44:27.599937   45269 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:27.606264   45269 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:27.606340   45269 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:44:27.612850   45269 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:44:27.623388   45269 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:44:27.628140   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:44:27.634670   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:44:27.642071   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:44:27.650207   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:44:27.656836   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:44:27.662837   45269 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1128 00:44:27.668909   45269 kubeadm.go:404] StartCluster: {Name:old-k8s-version-732472 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-732472 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:44:27.669005   45269 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 00:44:27.669075   45269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:27.711918   45269 cri.go:89] found id: ""
	I1128 00:44:27.711993   45269 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:44:27.722058   45269 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:44:27.722084   45269 kubeadm.go:636] restartCluster start
	I1128 00:44:27.722146   45269 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:44:27.731619   45269 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:27.733224   45269 kubeconfig.go:92] found "old-k8s-version-732472" server: "https://192.168.39.172:8443"
	I1128 00:44:27.736867   45269 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:44:27.747794   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:27.747862   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:27.762055   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:27.762079   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:27.762146   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:27.773241   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:28.273910   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:28.274001   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:28.286159   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:28.773393   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:28.773492   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:28.785781   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:29.274130   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:29.274199   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:29.289388   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:29.773916   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:29.774022   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:29.789483   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:30.273920   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:30.274026   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:30.285579   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:30.773910   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:30.774005   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:30.785536   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:31.273906   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:31.273977   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:31.285344   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:31.774284   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:31.774352   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:31.786435   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:32.273928   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:32.274008   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:32.289424   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:28.484735   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:30.983088   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:28.945293   45815 pod_ready.go:102] pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:30.445111   45815 pod_ready.go:92] pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:30.445133   45815 pod_ready.go:81] duration metric: took 3.535488087s waiting for pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.445143   45815 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-trr4j" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.450322   45815 pod_ready.go:92] pod "kube-proxy-trr4j" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:30.450342   45815 pod_ready.go:81] duration metric: took 5.193276ms waiting for pod "kube-proxy-trr4j" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.450350   45815 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.455002   45815 pod_ready.go:92] pod "kube-scheduler-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:30.455021   45815 pod_ready.go:81] duration metric: took 4.664949ms waiting for pod "kube-scheduler-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:30.455030   45815 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:32.915566   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:32.717086   46126 node_ready.go:58] node "default-k8s-diff-port-488423" has status "Ready":"False"
	I1128 00:44:33.216905   46126 node_ready.go:49] node "default-k8s-diff-port-488423" has status "Ready":"True"
	I1128 00:44:33.216930   46126 node_ready.go:38] duration metric: took 4.590426391s waiting for node "default-k8s-diff-port-488423" to be "Ready" ...
	I1128 00:44:33.216938   46126 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:44:33.223257   46126 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:33.744567   46126 pod_ready.go:92] pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:33.744592   46126 pod_ready.go:81] duration metric: took 521.313062ms waiting for pod "coredns-5dd5756b68-n7qpb" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:33.744601   46126 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:35.763867   46126 pod_ready.go:102] pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:32.773549   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:32.773643   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:32.785461   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:33.273911   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:33.273994   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:33.285646   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:33.773944   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:33.774046   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:33.786576   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:34.273902   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:34.273969   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:34.285791   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:34.773895   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:34.773965   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:34.785934   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:35.273675   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:35.273738   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:35.285549   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:35.773954   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:35.774041   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:35.786010   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:36.273591   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:36.273659   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:36.284794   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:36.773864   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:36.773931   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:36.786610   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:37.273899   45269 api_server.go:166] Checking apiserver status ...
	I1128 00:44:37.274025   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:44:37.285678   45269 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:44:32.983159   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:34.985149   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:37.482210   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:35.413821   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:37.417790   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:37.768358   46126 pod_ready.go:92] pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:37.768398   46126 pod_ready.go:81] duration metric: took 4.023788643s waiting for pod "etcd-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.768411   46126 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.775805   46126 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:37.775835   46126 pod_ready.go:81] duration metric: took 7.41435ms waiting for pod "kube-apiserver-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.775847   46126 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.788110   46126 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:37.788139   46126 pod_ready.go:81] duration metric: took 12.28235ms waiting for pod "kube-controller-manager-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:37.788151   46126 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2sfbm" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:38.018402   46126 pod_ready.go:92] pod "kube-proxy-2sfbm" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:38.018426   46126 pod_ready.go:81] duration metric: took 230.267334ms waiting for pod "kube-proxy-2sfbm" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:38.018443   46126 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:38.818531   46126 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace has status "Ready":"True"
	I1128 00:44:38.818559   46126 pod_ready.go:81] duration metric: took 800.108369ms waiting for pod "kube-scheduler-default-k8s-diff-port-488423" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:38.818572   46126 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace to be "Ready" ...
	I1128 00:44:41.127953   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:37.748214   45269 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 00:44:37.748260   45269 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:44:37.748276   45269 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 00:44:37.748334   45269 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 00:44:37.796781   45269 cri.go:89] found id: ""
	I1128 00:44:37.796866   45269 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:44:37.814832   45269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:44:37.824395   45269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:44:37.824469   45269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:37.833592   45269 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:44:37.833618   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:37.955071   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:38.939529   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:39.160852   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:39.243789   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:39.372434   45269 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:44:39.372525   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:39.405594   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:39.927024   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:40.426600   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:40.927163   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:44:40.966905   45269 api_server.go:72] duration metric: took 1.594470962s to wait for apiserver process to appear ...
	I1128 00:44:40.966937   45269 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:44:40.966959   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:40.967412   45269 api_server.go:269] stopped: https://192.168.39.172:8443/healthz: Get "https://192.168.39.172:8443/healthz": dial tcp 192.168.39.172:8443: connect: connection refused
	I1128 00:44:40.967457   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:40.967851   45269 api_server.go:269] stopped: https://192.168.39.172:8443/healthz: Get "https://192.168.39.172:8443/healthz": dial tcp 192.168.39.172:8443: connect: connection refused
	I1128 00:44:41.468536   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:39.483204   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:41.483578   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:39.914738   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:42.415305   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:43.130157   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:45.628970   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:46.468813   45269 api_server.go:269] stopped: https://192.168.39.172:8443/healthz: Get "https://192.168.39.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1128 00:44:46.468859   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:43.984318   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:46.483855   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:44.914911   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:47.415274   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:47.435553   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:47.435586   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:47.435601   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:47.480977   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:47.481002   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:47.481012   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:47.506064   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:44:47.506098   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:44:47.968355   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:47.974731   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1128 00:44:47.974766   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1128 00:44:48.468954   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:48.484597   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1128 00:44:48.484627   45269 api_server.go:103] status: https://192.168.39.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1128 00:44:48.968810   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:44:48.979310   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I1128 00:44:48.987751   45269 api_server.go:141] control plane version: v1.16.0
	I1128 00:44:48.987782   45269 api_server.go:131] duration metric: took 8.020836981s to wait for apiserver health ...
	I1128 00:44:48.987793   45269 cni.go:84] Creating CNI manager for ""
	I1128 00:44:48.987801   45269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:44:48.989720   45269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:44:48.129394   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:50.130239   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:48.991320   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:44:49.001231   45269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:44:49.019895   45269 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:44:49.027389   45269 system_pods.go:59] 7 kube-system pods found
	I1128 00:44:49.027417   45269 system_pods.go:61] "coredns-5644d7b6d9-9sh7z" [dcc226fb-5fd9-4757-bd93-1113f185cdce] Running
	I1128 00:44:49.027422   45269 system_pods.go:61] "etcd-old-k8s-version-732472" [a5899a5a-4812-41e1-9251-78fdaeea9597] Running
	I1128 00:44:49.027428   45269 system_pods.go:61] "kube-apiserver-old-k8s-version-732472" [13d2df8c-84a3-4bd4-8eab-ed9f732a3839] Running
	I1128 00:44:49.027435   45269 system_pods.go:61] "kube-controller-manager-old-k8s-version-732472" [6dc1e479-1a3a-4b9e-acd6-1183a25aece4] Running
	I1128 00:44:49.027441   45269 system_pods.go:61] "kube-proxy-jqrks" [e8fd665a-099e-4941-a8f2-917d2b864eeb] Running
	I1128 00:44:49.027447   45269 system_pods.go:61] "kube-scheduler-old-k8s-version-732472" [de147a31-927e-4051-b6ae-05ddf59290c8] Running
	I1128 00:44:49.027457   45269 system_pods.go:61] "storage-provisioner" [8d7e725e-6c26-4435-8605-88c7d924f5ca] Running
	I1128 00:44:49.027469   45269 system_pods.go:74] duration metric: took 7.544096ms to wait for pod list to return data ...
	I1128 00:44:49.027479   45269 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:44:49.032133   45269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:44:49.032170   45269 node_conditions.go:123] node cpu capacity is 2
	I1128 00:44:49.032183   45269 node_conditions.go:105] duration metric: took 4.695493ms to run NodePressure ...
	I1128 00:44:49.032203   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:44:49.293443   45269 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:44:49.297880   45269 retry.go:31] will retry after 216.894607ms: kubelet not initialised
	I1128 00:44:49.528912   45269 retry.go:31] will retry after 354.406288ms: kubelet not initialised
	I1128 00:44:49.897328   45269 retry.go:31] will retry after 462.959721ms: kubelet not initialised
	I1128 00:44:50.368260   45269 retry.go:31] will retry after 930.99638ms: kubelet not initialised
	I1128 00:44:51.303993   45269 retry.go:31] will retry after 1.275477572s: kubelet not initialised
	I1128 00:44:48.984387   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:51.482900   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:49.916072   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:52.415253   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:52.626182   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:54.626822   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:56.627881   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:52.584797   45269 retry.go:31] will retry after 2.542158001s: kubelet not initialised
	I1128 00:44:55.132600   45269 retry.go:31] will retry after 1.850404606s: kubelet not initialised
	I1128 00:44:56.987924   45269 retry.go:31] will retry after 2.371310185s: kubelet not initialised
	I1128 00:44:53.483557   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:55.982236   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:54.916135   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:57.415818   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:59.127409   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:01.629561   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:59.366141   45269 retry.go:31] will retry after 8.068803464s: kubelet not initialised
	I1128 00:44:57.983189   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:00.482336   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:02.483708   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:44:59.915991   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:02.414672   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:04.127296   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:06.127766   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:04.484008   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:06.983257   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:04.415147   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:06.914282   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:08.128322   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:10.627792   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:07.439538   45269 retry.go:31] will retry after 10.31431504s: kubelet not initialised
	I1128 00:45:08.985186   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:11.481933   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:08.914385   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:11.414899   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:12.628874   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:14.629312   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:17.126592   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:13.487653   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:15.983710   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:13.915497   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:15.915686   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:18.416396   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:19.127337   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:21.128352   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:17.759682   45269 retry.go:31] will retry after 12.137072248s: kubelet not initialised
	I1128 00:45:18.482187   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:20.982360   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:20.915228   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:22.918669   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:23.630252   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:26.128326   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:22.982597   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:24.983348   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:26.985418   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:25.415620   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:27.914150   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:28.626533   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:30.633655   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:29.902379   45269 kubeadm.go:787] kubelet initialised
	I1128 00:45:29.902403   45269 kubeadm.go:788] duration metric: took 40.608931816s waiting for restarted kubelet to initialise ...
	I1128 00:45:29.902410   45269 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:45:29.908442   45269 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-9sh7z" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.914018   45269 pod_ready.go:92] pod "coredns-5644d7b6d9-9sh7z" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:29.914055   45269 pod_ready.go:81] duration metric: took 5.584146ms waiting for pod "coredns-5644d7b6d9-9sh7z" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.914069   45269 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-v8z7h" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.918699   45269 pod_ready.go:92] pod "coredns-5644d7b6d9-v8z7h" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:29.918720   45269 pod_ready.go:81] duration metric: took 4.644035ms waiting for pod "coredns-5644d7b6d9-v8z7h" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.918729   45269 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.922818   45269 pod_ready.go:92] pod "etcd-old-k8s-version-732472" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:29.922837   45269 pod_ready.go:81] duration metric: took 4.102217ms waiting for pod "etcd-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.922846   45269 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.927182   45269 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-732472" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:29.927208   45269 pod_ready.go:81] duration metric: took 4.354519ms waiting for pod "kube-apiserver-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.927220   45269 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:30.301553   45269 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-732472" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:30.301583   45269 pod_ready.go:81] duration metric: took 374.352863ms waiting for pod "kube-controller-manager-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:30.301611   45269 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jqrks" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:30.700858   45269 pod_ready.go:92] pod "kube-proxy-jqrks" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:30.700879   45269 pod_ready.go:81] duration metric: took 399.260896ms waiting for pod "kube-proxy-jqrks" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:30.700890   45269 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:31.103319   45269 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-732472" in "kube-system" namespace has status "Ready":"True"
	I1128 00:45:31.103340   45269 pod_ready.go:81] duration metric: took 402.442769ms waiting for pod "kube-scheduler-old-k8s-version-732472" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:31.103349   45269 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace to be "Ready" ...
	I1128 00:45:29.482088   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:31.483235   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:29.915117   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:32.416142   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:33.127196   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:35.127500   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:37.128846   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:33.422466   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:35.908596   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:33.983360   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:35.983776   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:34.417575   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:36.915005   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:39.627473   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:42.126292   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:37.908783   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:39.909842   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:41.910185   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:38.481697   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:40.481935   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:42.483458   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:39.415244   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:41.915086   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:44.127088   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:46.128254   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:44.409802   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:46.415828   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:44.986515   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:47.483162   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:44.414253   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:46.416386   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:48.628705   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:51.130754   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:48.908171   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:50.910974   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:49.985617   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:52.483720   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:48.915063   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:50.915382   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:53.414813   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:53.627668   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:55.629312   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:53.409415   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:55.420993   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:54.983055   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:56.983251   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:55.919627   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:58.415481   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:58.129666   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:00.629368   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:57.910151   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:00.408805   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:45:59.485375   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:01.983754   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:00.915086   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:03.413478   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:03.129933   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:05.627697   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:02.410888   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:04.910323   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:04.482593   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:06.981922   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:05.414437   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:07.415659   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:07.628741   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:10.126717   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:12.127246   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:07.408374   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:09.411381   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:11.416658   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:08.982790   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:10.984134   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:09.914828   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:11.915812   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:14.135673   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:16.626139   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:13.909480   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:16.409873   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:13.481792   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:15.482823   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:14.416315   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:16.914123   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:18.628828   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:21.131592   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:18.411060   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:20.910071   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:17.983098   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:20.482047   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:22.483266   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:19.413826   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:21.415442   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:23.626664   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:25.626823   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:23.424355   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:25.908255   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:24.984606   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:27.482265   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:23.915227   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:26.417059   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:27.628773   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:30.126818   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:27.911487   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:30.409652   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:29.485507   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:31.983913   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:28.916438   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:31.415565   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:32.626887   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:34.628401   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:37.128691   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:32.910776   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:35.421469   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:34.482605   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:36.982844   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:33.913533   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:35.914337   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:37.914708   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:39.627072   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:41.627591   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:37.908233   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:39.910199   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:38.983620   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:41.482862   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:39.914965   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:41.915003   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:43.628492   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:46.127393   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:42.408895   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:44.409264   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:46.909077   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:43.483111   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:45.483236   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:43.916039   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:46.415407   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:48.627253   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:51.127503   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:49.418512   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:51.427899   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:47.982977   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:49.983264   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:52.483168   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:48.914124   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:50.915620   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:52.919567   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:53.627296   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:55.627334   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:53.908531   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:56.408610   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:54.983084   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:57.481889   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:55.414154   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:57.416518   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:58.126605   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:00.127372   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:02.127896   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:58.410152   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:00.910206   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:59.482177   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:01.982997   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:46:59.915381   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:01.915574   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:04.626760   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:06.628849   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:03.417243   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:05.417887   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:03.983490   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:05.984161   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:04.414677   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:06.420179   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:09.127843   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:11.626987   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:07.908838   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:10.408385   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:08.482404   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:10.484146   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:08.914093   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:10.922145   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:13.417231   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:13.627586   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:15.628294   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:12.410728   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:14.910177   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:16.910469   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:12.982123   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:14.984037   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:17.483771   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:15.915323   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:18.415070   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:18.129617   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:20.628266   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:19.423065   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:21.908978   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:19.983122   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:22.482857   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:20.415232   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:22.915218   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:23.129285   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:25.627839   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:23.910794   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:26.409956   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:24.985146   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:27.482512   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:24.916041   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:27.415836   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:27.627978   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:30.127213   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:32.127569   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:28.413035   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:30.909092   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:29.483528   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:31.983745   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:29.913604   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:31.914567   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:34.129952   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:36.626951   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:33.414345   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:35.414559   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:34.481916   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:36.482024   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:34.413520   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:36.414517   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:38.416081   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:38.627773   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:41.126690   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:37.414665   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:39.908876   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:38.482323   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:40.983125   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:40.914615   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:43.415528   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:43.128692   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:45.627228   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:42.412788   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:44.909732   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:46.910133   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:43.482424   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:45.482507   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:47.482562   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:45.416841   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:47.914229   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:48.127074   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:50.627355   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:49.411030   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:51.420657   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:49.483765   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:51.982325   45580 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:50.414235   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:52.414715   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:52.627557   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:54.628111   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:57.129482   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:53.910232   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:56.409320   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:53.795074   45580 pod_ready.go:81] duration metric: took 4m0.000752019s waiting for pod "metrics-server-57f55c9bc5-sx4m7" in "kube-system" namespace to be "Ready" ...
	E1128 00:47:53.795108   45580 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:47:53.795124   45580 pod_ready.go:38] duration metric: took 4m9.844437599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:47:53.795148   45580 kubeadm.go:640] restartCluster took 4m29.759592783s
	W1128 00:47:53.795209   45580 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 00:47:53.795237   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 00:47:54.416610   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:56.915781   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:59.129569   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:01.627046   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:58.409599   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:00.409906   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:47:58.916155   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:01.416966   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:03.627676   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:06.126607   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:02.410451   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:04.411074   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:06.912243   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:07.609428   45580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.814163406s)
	I1128 00:48:07.609508   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:07.624300   45580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:48:07.634606   45580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:48:07.644733   45580 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:48:07.644802   45580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 00:48:03.915780   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:06.416602   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:08.128657   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:10.629487   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:09.411193   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:11.908147   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:07.867577   45580 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 00:48:08.915404   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:11.416668   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:13.129233   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:15.630498   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:13.909762   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:16.409160   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:13.916628   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:15.916715   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:17.917022   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:19.126081   45580 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1128 00:48:19.126157   45580 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 00:48:19.126245   45580 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 00:48:19.126356   45580 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 00:48:19.126476   45580 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 00:48:19.126544   45580 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 00:48:19.128354   45580 out.go:204]   - Generating certificates and keys ...
	I1128 00:48:19.128461   45580 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 00:48:19.128546   45580 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 00:48:19.128664   45580 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 00:48:19.128807   45580 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 00:48:19.128927   45580 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 00:48:19.129001   45580 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 00:48:19.129100   45580 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 00:48:19.129175   45580 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 00:48:19.129275   45580 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 00:48:19.129387   45580 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 00:48:19.129432   45580 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 00:48:19.129501   45580 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 00:48:19.129559   45580 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 00:48:19.129616   45580 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 00:48:19.129696   45580 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 00:48:19.129760   45580 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 00:48:19.129853   45580 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 00:48:19.129921   45580 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 00:48:19.131350   45580 out.go:204]   - Booting up control plane ...
	I1128 00:48:19.131462   45580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 00:48:19.131578   45580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 00:48:19.131674   45580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 00:48:19.131798   45580 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 00:48:19.131914   45580 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 00:48:19.131972   45580 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 00:48:19.132149   45580 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 00:48:19.132245   45580 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502916 seconds
	I1128 00:48:19.132388   45580 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 00:48:19.132540   45580 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 00:48:19.132619   45580 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 00:48:19.132850   45580 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-304541 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 00:48:19.132959   45580 kubeadm.go:322] [bootstrap-token] Using token: tbyyd7.r005gkl9z2ll5pno
	I1128 00:48:19.134488   45580 out.go:204]   - Configuring RBAC rules ...
	I1128 00:48:19.134603   45580 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 00:48:19.134691   45580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 00:48:19.134841   45580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 00:48:19.135030   45580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 00:48:19.135200   45580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 00:48:19.135311   45580 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 00:48:19.135453   45580 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 00:48:19.135532   45580 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 00:48:19.135600   45580 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 00:48:19.135611   45580 kubeadm.go:322] 
	I1128 00:48:19.135692   45580 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 00:48:19.135700   45580 kubeadm.go:322] 
	I1128 00:48:19.135798   45580 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 00:48:19.135807   45580 kubeadm.go:322] 
	I1128 00:48:19.135840   45580 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 00:48:19.135916   45580 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 00:48:19.135987   45580 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 00:48:19.135996   45580 kubeadm.go:322] 
	I1128 00:48:19.136074   45580 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 00:48:19.136084   45580 kubeadm.go:322] 
	I1128 00:48:19.136153   45580 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 00:48:19.136161   45580 kubeadm.go:322] 
	I1128 00:48:19.136231   45580 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 00:48:19.136329   45580 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 00:48:19.136439   45580 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 00:48:19.136448   45580 kubeadm.go:322] 
	I1128 00:48:19.136552   45580 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 00:48:19.136662   45580 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 00:48:19.136674   45580 kubeadm.go:322] 
	I1128 00:48:19.136766   45580 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token tbyyd7.r005gkl9z2ll5pno \
	I1128 00:48:19.136878   45580 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1128 00:48:19.136907   45580 kubeadm.go:322] 	--control-plane 
	I1128 00:48:19.136913   45580 kubeadm.go:322] 
	I1128 00:48:19.136986   45580 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 00:48:19.136998   45580 kubeadm.go:322] 
	I1128 00:48:19.137097   45580 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tbyyd7.r005gkl9z2ll5pno \
	I1128 00:48:19.137259   45580 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1128 00:48:19.137282   45580 cni.go:84] Creating CNI manager for ""
	I1128 00:48:19.137290   45580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:48:19.138890   45580 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:48:18.126502   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:20.131785   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:18.410659   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:20.910338   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:19.140172   45580 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:48:19.160540   45580 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:48:19.224333   45580 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:48:19.224409   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:19.224455   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=embed-certs-304541 minikube.k8s.io/updated_at=2023_11_28T00_48_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:19.301346   45580 ops.go:34] apiserver oom_adj: -16
	I1128 00:48:19.544274   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:19.656215   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:20.257645   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:20.757476   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:21.257246   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:21.757278   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:22.256655   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:22.757282   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:20.415051   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:22.914901   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:22.627184   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:24.627388   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:27.127311   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:23.409417   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:25.909086   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:23.257594   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:23.757135   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:24.257396   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:24.757508   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:25.257426   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:25.756605   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:26.256768   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:26.756656   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:27.256783   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:27.756856   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:25.414964   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:27.415763   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:28.257005   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:28.756875   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:29.256833   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:29.757261   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:30.257313   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:30.756918   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:31.257535   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:31.757356   45580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:31.917284   45580 kubeadm.go:1081] duration metric: took 12.692941702s to wait for elevateKubeSystemPrivileges.
	I1128 00:48:31.917326   45580 kubeadm.go:406] StartCluster complete in 5m7.933075195s
	I1128 00:48:31.917353   45580 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:48:31.917430   45580 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:48:31.919940   45580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:48:31.920855   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:48:31.921063   45580 config.go:182] Loaded profile config "embed-certs-304541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:48:31.921037   45580 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:48:31.921110   45580 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-304541"
	I1128 00:48:31.921123   45580 addons.go:69] Setting default-storageclass=true in profile "embed-certs-304541"
	I1128 00:48:31.921143   45580 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-304541"
	I1128 00:48:31.921148   45580 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-304541"
	W1128 00:48:31.921152   45580 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:48:31.921116   45580 addons.go:69] Setting metrics-server=true in profile "embed-certs-304541"
	I1128 00:48:31.921213   45580 host.go:66] Checking if "embed-certs-304541" exists ...
	I1128 00:48:31.921220   45580 addons.go:231] Setting addon metrics-server=true in "embed-certs-304541"
	W1128 00:48:31.921229   45580 addons.go:240] addon metrics-server should already be in state true
	I1128 00:48:31.921265   45580 host.go:66] Checking if "embed-certs-304541" exists ...
	I1128 00:48:31.921531   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.921545   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.921578   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.921584   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.921594   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.921605   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.941345   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39959
	I1128 00:48:31.941374   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33283
	I1128 00:48:31.941359   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I1128 00:48:31.942009   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.942040   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.942449   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.942460   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.942477   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.942488   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.942549   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.942937   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.942955   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.943129   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.943134   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.943300   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.943646   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.943671   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.943774   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:48:31.944439   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.944470   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.947579   45580 addons.go:231] Setting addon default-storageclass=true in "embed-certs-304541"
	W1128 00:48:31.947605   45580 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:48:31.947635   45580 host.go:66] Checking if "embed-certs-304541" exists ...
	I1128 00:48:31.948083   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.948114   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.964906   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39541
	I1128 00:48:31.964942   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38435
	I1128 00:48:31.966157   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.966261   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.966778   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.966795   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.966980   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.966999   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.967444   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.967481   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37679
	I1128 00:48:31.967447   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.967636   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:48:31.968331   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.968434   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:48:31.968812   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.968830   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.969729   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:48:31.972519   45580 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:48:31.970271   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:48:31.972982   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.974461   45580 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:48:31.974479   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:48:31.974498   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:48:31.976187   45580 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 00:48:31.974991   45580 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:48:31.977660   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:31.977907   45580 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 00:48:31.977925   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 00:48:31.977943   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:48:31.978001   45580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:48:31.978243   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:48:31.978264   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:31.978506   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:48:31.978727   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:48:31.978954   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:48:31.979170   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:48:31.980878   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:31.981226   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:48:31.981262   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:31.981399   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:48:31.981571   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:48:31.981690   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:48:31.981810   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:48:31.997812   45580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43311
	I1128 00:48:31.998404   45580 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:48:31.998989   45580 main.go:141] libmachine: Using API Version  1
	I1128 00:48:31.999016   45580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:48:31.999427   45580 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:48:31.999652   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetState
	I1128 00:48:32.001212   45580 main.go:141] libmachine: (embed-certs-304541) Calling .DriverName
	I1128 00:48:32.001482   45580 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:48:32.001496   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:48:32.001513   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHHostname
	I1128 00:48:32.002981   45580 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-304541" context rescaled to 1 replicas
	I1128 00:48:32.003019   45580 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.93 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:48:32.005961   45580 out.go:177] * Verifying Kubernetes components...
	I1128 00:48:29.127403   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:31.127830   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:27.911587   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:30.411923   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:32.004640   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:32.005211   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHPort
	I1128 00:48:32.007586   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:32.007585   45580 main.go:141] libmachine: (embed-certs-304541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:1d:4f", ip: ""} in network mk-embed-certs-304541: {Iface:virbr2 ExpiryTime:2023-11-28 01:35:37 +0000 UTC Type:0 Mac:52:54:00:0a:1d:4f Iaid: IPaddr:192.168.50.93 Prefix:24 Hostname:embed-certs-304541 Clientid:01:52:54:00:0a:1d:4f}
	I1128 00:48:32.007700   45580 main.go:141] libmachine: (embed-certs-304541) DBG | domain embed-certs-304541 has defined IP address 192.168.50.93 and MAC address 52:54:00:0a:1d:4f in network mk-embed-certs-304541
	I1128 00:48:32.007722   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHKeyPath
	I1128 00:48:32.007894   45580 main.go:141] libmachine: (embed-certs-304541) Calling .GetSSHUsername
	I1128 00:48:32.008049   45580 sshutil.go:53] new ssh client: &{IP:192.168.50.93 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/embed-certs-304541/id_rsa Username:docker}
	I1128 00:48:32.213297   45580 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 00:48:32.213322   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 00:48:32.255646   45580 node_ready.go:35] waiting up to 6m0s for node "embed-certs-304541" to be "Ready" ...
	I1128 00:48:32.255743   45580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 00:48:32.268542   45580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:48:32.270044   45580 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 00:48:32.270066   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 00:48:32.304458   45580 node_ready.go:49] node "embed-certs-304541" has status "Ready":"True"
	I1128 00:48:32.304486   45580 node_ready.go:38] duration metric: took 48.802082ms waiting for node "embed-certs-304541" to be "Ready" ...
	I1128 00:48:32.304498   45580 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:48:32.320550   45580 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6n54l" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:32.437814   45580 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:48:32.437852   45580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 00:48:32.462274   45580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:48:32.541622   45580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:48:29.418692   45815 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:30.455152   45815 pod_ready.go:81] duration metric: took 4m0.000108261s waiting for pod "metrics-server-57f55c9bc5-lh4m8" in "kube-system" namespace to be "Ready" ...
	E1128 00:48:30.455199   45815 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:48:30.455216   45815 pod_ready.go:38] duration metric: took 4m12.906382743s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:48:30.455251   45815 kubeadm.go:640] restartCluster took 4m33.513232005s
	W1128 00:48:30.455312   45815 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 00:48:30.455356   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 00:48:34.327113   45580 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.071322786s)
	I1128 00:48:34.327155   45580 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
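The completed command above rewrites the coredns ConfigMap in place, inserting a hosts block that maps host.minikube.internal to the gateway address 192.168.50.1 so pods can reach the host machine. One way to confirm the injected record, assuming kubectl is pointed at this cluster's context (a sketch; the context name matches the profile):

    # Show the hosts block injected into CoreDNS's Corefile
    kubectl --context embed-certs-304541 -n kube-system get configmap coredns -o yaml \
      | grep -A3 'hosts {'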
	I1128 00:48:34.342711   45580 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.074127133s)
	I1128 00:48:34.342776   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.342791   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.343188   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.343284   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.343328   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.343339   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.343348   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.343581   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.343598   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.366719   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.366754   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.367052   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.367104   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.367119   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.467705   45580 pod_ready.go:102] pod "coredns-5dd5756b68-6n54l" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:34.935662   45580 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.473338078s)
	I1128 00:48:34.935745   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.935814   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.936143   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.936184   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.936193   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.936203   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.936211   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.936435   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.936482   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.977248   45580 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.435573064s)
	I1128 00:48:34.977318   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.977345   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.977738   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.977785   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.977806   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.977824   45580 main.go:141] libmachine: Making call to close driver server
	I1128 00:48:34.977837   45580 main.go:141] libmachine: (embed-certs-304541) Calling .Close
	I1128 00:48:34.979823   45580 main.go:141] libmachine: (embed-certs-304541) DBG | Closing plugin on server side
	I1128 00:48:34.979823   45580 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:48:34.979849   45580 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:48:34.979860   45580 addons.go:467] Verifying addon metrics-server=true in "embed-certs-304541"
	I1128 00:48:34.981768   45580 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 00:48:33.129597   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:35.129880   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:32.912875   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:35.411225   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:34.983440   45580 addons.go:502] enable addons completed in 3.062399778s: enabled=[default-storageclass storage-provisioner metrics-server]
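At this point default-storageclass, storage-provisioner and metrics-server have been applied, but note the metrics-server image in this run points at fake.domain/registry.k8s.io/echoserver:1.4, so its pod stays Pending/unready (see the metrics-server-57f55c9bc5-xzz2t entries further down). A hedged sketch of checking the addon state manually, assuming the profile and context name from this log:

    # Addon status as minikube sees it
    minikube -p embed-certs-304541 addons list

    # Deployment status; with the fake.domain image this rollout is not expected to complete
    kubectl --context embed-certs-304541 -n kube-system rollout status \
      deployment/metrics-server --timeout=60s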
	I1128 00:48:36.495977   45580 pod_ready.go:92] pod "coredns-5dd5756b68-6n54l" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.496002   45580 pod_ready.go:81] duration metric: took 4.175421265s waiting for pod "coredns-5dd5756b68-6n54l" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.496012   45580 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kjg5f" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.508269   45580 pod_ready.go:92] pod "coredns-5dd5756b68-kjg5f" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.508293   45580 pod_ready.go:81] duration metric: took 12.274473ms waiting for pod "coredns-5dd5756b68-kjg5f" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.508302   45580 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.515826   45580 pod_ready.go:92] pod "etcd-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.515855   45580 pod_ready.go:81] duration metric: took 7.545794ms waiting for pod "etcd-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.515873   45580 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.523206   45580 pod_ready.go:92] pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.523271   45580 pod_ready.go:81] duration metric: took 7.388614ms waiting for pod "kube-apiserver-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.523286   45580 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.529859   45580 pod_ready.go:92] pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.529881   45580 pod_ready.go:81] duration metric: took 6.58575ms waiting for pod "kube-controller-manager-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.529889   45580 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w5ct2" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.857435   45580 pod_ready.go:92] pod "kube-proxy-w5ct2" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:36.857467   45580 pod_ready.go:81] duration metric: took 327.570428ms waiting for pod "kube-proxy-w5ct2" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:36.857481   45580 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:37.257433   45580 pod_ready.go:92] pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace has status "Ready":"True"
	I1128 00:48:37.257455   45580 pod_ready.go:81] duration metric: took 399.966903ms waiting for pod "kube-scheduler-embed-certs-304541" in "kube-system" namespace to be "Ready" ...
	I1128 00:48:37.257462   45580 pod_ready.go:38] duration metric: took 4.952954771s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
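The pod_ready loop above polls each system-critical pod by label until it reports Ready. Roughly the same check can be expressed with kubectl wait, shown here as a sketch for one of the labels listed in the log (k8s-app=kube-dns); the context name is assumed to match the profile:

    # Block until the CoreDNS pods report Ready, or fail after 6 minutes
    kubectl --context embed-certs-304541 -n kube-system wait \
      --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m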
	I1128 00:48:37.257476   45580 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:48:37.257523   45580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:48:37.275627   45580 api_server.go:72] duration metric: took 5.272574466s to wait for apiserver process to appear ...
	I1128 00:48:37.275656   45580 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:48:37.275673   45580 api_server.go:253] Checking apiserver healthz at https://192.168.50.93:8443/healthz ...
	I1128 00:48:37.283884   45580 api_server.go:279] https://192.168.50.93:8443/healthz returned 200:
	ok
	I1128 00:48:37.285716   45580 api_server.go:141] control plane version: v1.28.4
	I1128 00:48:37.285744   45580 api_server.go:131] duration metric: took 10.080776ms to wait for apiserver health ...
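The healthz probe above hits the apiserver endpoint directly over HTTPS. An equivalent check that reuses the kubeconfig credentials instead of raw curl (a sketch, assuming the same context):

    # Should print "ok" when the apiserver is healthy
    kubectl --context embed-certs-304541 get --raw /healthz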
	I1128 00:48:37.285766   45580 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:48:37.460530   45580 system_pods.go:59] 9 kube-system pods found
	I1128 00:48:37.460555   45580 system_pods.go:61] "coredns-5dd5756b68-6n54l" [bb59175d-e2d9-4c98-9940-b705fa76512f] Running
	I1128 00:48:37.460560   45580 system_pods.go:61] "coredns-5dd5756b68-kjg5f" [bf956dfb-3a7f-4605-a849-ee887562fce5] Running
	I1128 00:48:37.460563   45580 system_pods.go:61] "etcd-embed-certs-304541" [7726ea36-d2a2-4ba8-ad20-e892b0c0059c] Running
	I1128 00:48:37.460568   45580 system_pods.go:61] "kube-apiserver-embed-certs-304541" [340e8023-afd3-4105-b513-3f232dfbd370] Running
	I1128 00:48:37.460572   45580 system_pods.go:61] "kube-controller-manager-embed-certs-304541" [ddba15be-e7c2-4cea-9256-1d7e6ea7b017] Running
	I1128 00:48:37.460575   45580 system_pods.go:61] "kube-proxy-w5ct2" [b3ac66db-fe8d-419d-9237-b0dd4077559a] Running
	I1128 00:48:37.460579   45580 system_pods.go:61] "kube-scheduler-embed-certs-304541" [30830958-963d-4571-8e47-acc169506ead] Running
	I1128 00:48:37.460585   45580 system_pods.go:61] "metrics-server-57f55c9bc5-xzz2t" [926e9a40-f0fe-47ea-8e44-6816132ec0c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:48:37.460589   45580 system_pods.go:61] "storage-provisioner" [c62a8419-b0e5-4330-a49b-986693e183b2] Running
	I1128 00:48:37.460597   45580 system_pods.go:74] duration metric: took 174.824783ms to wait for pod list to return data ...
	I1128 00:48:37.460619   45580 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:48:37.656404   45580 default_sa.go:45] found service account: "default"
	I1128 00:48:37.656431   45580 default_sa.go:55] duration metric: took 195.805836ms for default service account to be created ...
	I1128 00:48:37.656444   45580 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:48:37.861049   45580 system_pods.go:86] 9 kube-system pods found
	I1128 00:48:37.861086   45580 system_pods.go:89] "coredns-5dd5756b68-6n54l" [bb59175d-e2d9-4c98-9940-b705fa76512f] Running
	I1128 00:48:37.861095   45580 system_pods.go:89] "coredns-5dd5756b68-kjg5f" [bf956dfb-3a7f-4605-a849-ee887562fce5] Running
	I1128 00:48:37.861101   45580 system_pods.go:89] "etcd-embed-certs-304541" [7726ea36-d2a2-4ba8-ad20-e892b0c0059c] Running
	I1128 00:48:37.861108   45580 system_pods.go:89] "kube-apiserver-embed-certs-304541" [340e8023-afd3-4105-b513-3f232dfbd370] Running
	I1128 00:48:37.861116   45580 system_pods.go:89] "kube-controller-manager-embed-certs-304541" [ddba15be-e7c2-4cea-9256-1d7e6ea7b017] Running
	I1128 00:48:37.861122   45580 system_pods.go:89] "kube-proxy-w5ct2" [b3ac66db-fe8d-419d-9237-b0dd4077559a] Running
	I1128 00:48:37.861128   45580 system_pods.go:89] "kube-scheduler-embed-certs-304541" [30830958-963d-4571-8e47-acc169506ead] Running
	I1128 00:48:37.861140   45580 system_pods.go:89] "metrics-server-57f55c9bc5-xzz2t" [926e9a40-f0fe-47ea-8e44-6816132ec0c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:48:37.861157   45580 system_pods.go:89] "storage-provisioner" [c62a8419-b0e5-4330-a49b-986693e183b2] Running
	I1128 00:48:37.861171   45580 system_pods.go:126] duration metric: took 204.720501ms to wait for k8s-apps to be running ...
	I1128 00:48:37.861187   45580 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:48:37.861241   45580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:37.875344   45580 system_svc.go:56] duration metric: took 14.150294ms WaitForService to wait for kubelet.
	I1128 00:48:37.875380   45580 kubeadm.go:581] duration metric: took 5.872335245s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:48:37.875407   45580 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:48:38.057075   45580 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:48:38.057106   45580 node_conditions.go:123] node cpu capacity is 2
	I1128 00:48:38.057117   45580 node_conditions.go:105] duration metric: took 181.705529ms to run NodePressure ...
	I1128 00:48:38.057127   45580 start.go:228] waiting for startup goroutines ...
	I1128 00:48:38.057133   45580 start.go:233] waiting for cluster config update ...
	I1128 00:48:38.057141   45580 start.go:242] writing updated cluster config ...
	I1128 00:48:38.057366   45580 ssh_runner.go:195] Run: rm -f paused
	I1128 00:48:38.107014   45580 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 00:48:38.109071   45580 out.go:177] * Done! kubectl is now configured to use "embed-certs-304541" cluster and "default" namespace by default
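Once minikube prints "Done!", the kubeconfig updated earlier in this log has the embed-certs-304541 context selected. A minimal post-start sanity check (standard kubectl, nothing profile-specific beyond the context name):

    kubectl config current-context          # expected: embed-certs-304541
    kubectl get nodes -o wide               # single control-plane node, Ready
    kubectl -n kube-system get pods         # coredns, etcd, kube-*, storage-provisioner Running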
	I1128 00:48:37.626062   46126 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:38.819130   46126 pod_ready.go:81] duration metric: took 4m0.000531461s waiting for pod "metrics-server-57f55c9bc5-fk9xx" in "kube-system" namespace to be "Ready" ...
	E1128 00:48:38.819159   46126 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:48:38.819168   46126 pod_ready.go:38] duration metric: took 4m5.602220781s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:48:38.819189   46126 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:48:38.819216   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 00:48:38.819269   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 00:48:38.882052   46126 cri.go:89] found id: "a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:38.882075   46126 cri.go:89] found id: ""
	I1128 00:48:38.882084   46126 logs.go:284] 1 containers: [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6]
	I1128 00:48:38.882143   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:38.886688   46126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 00:48:38.886751   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 00:48:38.926163   46126 cri.go:89] found id: "0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:38.926190   46126 cri.go:89] found id: ""
	I1128 00:48:38.926197   46126 logs.go:284] 1 containers: [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c]
	I1128 00:48:38.926259   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:38.930505   46126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 00:48:38.930558   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 00:48:38.979793   46126 cri.go:89] found id: "02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:38.979816   46126 cri.go:89] found id: ""
	I1128 00:48:38.979823   46126 logs.go:284] 1 containers: [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b]
	I1128 00:48:38.979876   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:38.984146   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 00:48:38.984244   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 00:48:39.033485   46126 cri.go:89] found id: "032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:39.033509   46126 cri.go:89] found id: ""
	I1128 00:48:39.033519   46126 logs.go:284] 1 containers: [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193]
	I1128 00:48:39.033575   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:39.038977   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 00:48:39.039038   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 00:48:39.079669   46126 cri.go:89] found id: "2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:39.079697   46126 cri.go:89] found id: ""
	I1128 00:48:39.079707   46126 logs.go:284] 1 containers: [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55]
	I1128 00:48:39.079767   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:39.084447   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 00:48:39.084515   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 00:48:39.121494   46126 cri.go:89] found id: "cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:39.121523   46126 cri.go:89] found id: ""
	I1128 00:48:39.121533   46126 logs.go:284] 1 containers: [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64]
	I1128 00:48:39.121594   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:39.126495   46126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 00:48:39.126554   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 00:48:39.168822   46126 cri.go:89] found id: ""
	I1128 00:48:39.168851   46126 logs.go:284] 0 containers: []
	W1128 00:48:39.168862   46126 logs.go:286] No container was found matching "kindnet"
	I1128 00:48:39.168869   46126 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 00:48:39.168924   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 00:48:39.213834   46126 cri.go:89] found id: "fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:39.213859   46126 cri.go:89] found id: ""
	I1128 00:48:39.213869   46126 logs.go:284] 1 containers: [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc]
	I1128 00:48:39.213914   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:39.218746   46126 logs.go:123] Gathering logs for dmesg ...
	I1128 00:48:39.218772   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 00:48:39.232098   46126 logs.go:123] Gathering logs for describe nodes ...
	I1128 00:48:39.232127   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 00:48:39.373641   46126 logs.go:123] Gathering logs for kube-apiserver [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6] ...
	I1128 00:48:39.373674   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:39.451311   46126 logs.go:123] Gathering logs for storage-provisioner [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc] ...
	I1128 00:48:39.451349   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:39.498219   46126 logs.go:123] Gathering logs for CRI-O ...
	I1128 00:48:39.498247   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 00:48:39.952276   46126 logs.go:123] Gathering logs for kubelet ...
	I1128 00:48:39.952314   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 00:48:40.008385   46126 logs.go:123] Gathering logs for coredns [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b] ...
	I1128 00:48:40.008425   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:40.052409   46126 logs.go:123] Gathering logs for kube-scheduler [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193] ...
	I1128 00:48:40.052443   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:40.092943   46126 logs.go:123] Gathering logs for kube-proxy [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55] ...
	I1128 00:48:40.092978   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:40.135490   46126 logs.go:123] Gathering logs for kube-controller-manager [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64] ...
	I1128 00:48:40.135520   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:40.189756   46126 logs.go:123] Gathering logs for container status ...
	I1128 00:48:40.189793   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 00:48:40.242615   46126 logs.go:123] Gathering logs for etcd [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c] ...
	I1128 00:48:40.242643   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
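The block above is minikube's log-collection fallback: it asks CRI-O via crictl for the container ID of each control-plane component, tails 400 lines from each, and pulls the kubelet and crio units from journald. Reproducing it by hand inside the guest looks roughly like this (a sketch; the container ID is whatever crictl returns on that node):

    # Find the kube-apiserver container and tail its logs (run inside the node, e.g. via minikube ssh)
    CID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    sudo crictl logs --tail 400 "$CID"

    # Unit logs gathered the same way as in this report
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400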
	I1128 00:48:37.415898   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:39.910954   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:42.802428   46126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:48:42.818606   46126 api_server.go:72] duration metric: took 4m14.508070703s to wait for apiserver process to appear ...
	I1128 00:48:42.818632   46126 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:48:42.818667   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 00:48:42.818721   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 00:48:42.872566   46126 cri.go:89] found id: "a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:42.872603   46126 cri.go:89] found id: ""
	I1128 00:48:42.872613   46126 logs.go:284] 1 containers: [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6]
	I1128 00:48:42.872675   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:42.878165   46126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 00:48:42.878232   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 00:48:42.924667   46126 cri.go:89] found id: "0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:42.924689   46126 cri.go:89] found id: ""
	I1128 00:48:42.924699   46126 logs.go:284] 1 containers: [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c]
	I1128 00:48:42.924772   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:42.929748   46126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 00:48:42.929809   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 00:48:42.977787   46126 cri.go:89] found id: "02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:42.977815   46126 cri.go:89] found id: ""
	I1128 00:48:42.977825   46126 logs.go:284] 1 containers: [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b]
	I1128 00:48:42.977887   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:42.982991   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 00:48:42.983071   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 00:48:43.032835   46126 cri.go:89] found id: "032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:43.032866   46126 cri.go:89] found id: ""
	I1128 00:48:43.032876   46126 logs.go:284] 1 containers: [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193]
	I1128 00:48:43.032933   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:43.038635   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 00:48:43.038711   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 00:48:43.084051   46126 cri.go:89] found id: "2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:43.084080   46126 cri.go:89] found id: ""
	I1128 00:48:43.084090   46126 logs.go:284] 1 containers: [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55]
	I1128 00:48:43.084161   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:43.088908   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 00:48:43.088976   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 00:48:43.130640   46126 cri.go:89] found id: "cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:43.130666   46126 cri.go:89] found id: ""
	I1128 00:48:43.130676   46126 logs.go:284] 1 containers: [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64]
	I1128 00:48:43.130738   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:43.135354   46126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 00:48:43.135434   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 00:48:43.179655   46126 cri.go:89] found id: ""
	I1128 00:48:43.179690   46126 logs.go:284] 0 containers: []
	W1128 00:48:43.179699   46126 logs.go:286] No container was found matching "kindnet"
	I1128 00:48:43.179705   46126 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 00:48:43.179770   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 00:48:43.228309   46126 cri.go:89] found id: "fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:43.228335   46126 cri.go:89] found id: ""
	I1128 00:48:43.228343   46126 logs.go:284] 1 containers: [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc]
	I1128 00:48:43.228404   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:43.233343   46126 logs.go:123] Gathering logs for dmesg ...
	I1128 00:48:43.233375   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 00:48:43.247396   46126 logs.go:123] Gathering logs for describe nodes ...
	I1128 00:48:43.247430   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 00:48:43.386131   46126 logs.go:123] Gathering logs for kube-apiserver [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6] ...
	I1128 00:48:43.386181   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:43.463228   46126 logs.go:123] Gathering logs for kube-proxy [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55] ...
	I1128 00:48:43.463275   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:43.519469   46126 logs.go:123] Gathering logs for kube-controller-manager [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64] ...
	I1128 00:48:43.519511   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:43.581402   46126 logs.go:123] Gathering logs for container status ...
	I1128 00:48:43.581437   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 00:48:43.641804   46126 logs.go:123] Gathering logs for kubelet ...
	I1128 00:48:43.641844   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 00:48:43.707768   46126 logs.go:123] Gathering logs for etcd [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c] ...
	I1128 00:48:43.707807   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:43.779636   46126 logs.go:123] Gathering logs for coredns [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b] ...
	I1128 00:48:43.779673   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:43.822939   46126 logs.go:123] Gathering logs for kube-scheduler [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193] ...
	I1128 00:48:43.822972   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:43.869304   46126 logs.go:123] Gathering logs for storage-provisioner [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc] ...
	I1128 00:48:43.869344   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:43.917500   46126 logs.go:123] Gathering logs for CRI-O ...
	I1128 00:48:43.917528   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 00:48:46.886551   46126 api_server.go:253] Checking apiserver healthz at https://192.168.72.242:8444/healthz ...
	I1128 00:48:46.892696   46126 api_server.go:279] https://192.168.72.242:8444/healthz returned 200:
	ok
	I1128 00:48:46.894400   46126 api_server.go:141] control plane version: v1.28.4
	I1128 00:48:46.894424   46126 api_server.go:131] duration metric: took 4.075784232s to wait for apiserver health ...
	I1128 00:48:46.894433   46126 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:48:46.894455   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 00:48:46.894492   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 00:48:46.939259   46126 cri.go:89] found id: "a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:46.939291   46126 cri.go:89] found id: ""
	I1128 00:48:46.939302   46126 logs.go:284] 1 containers: [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6]
	I1128 00:48:46.939364   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:46.946934   46126 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 00:48:46.947012   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 00:48:46.989896   46126 cri.go:89] found id: "0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:46.989920   46126 cri.go:89] found id: ""
	I1128 00:48:46.989930   46126 logs.go:284] 1 containers: [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c]
	I1128 00:48:46.989988   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:46.994923   46126 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 00:48:46.994994   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 00:48:47.040298   46126 cri.go:89] found id: "02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:47.040330   46126 cri.go:89] found id: ""
	I1128 00:48:47.040339   46126 logs.go:284] 1 containers: [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b]
	I1128 00:48:47.040396   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.045041   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 00:48:47.045113   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 00:48:47.093254   46126 cri.go:89] found id: "032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:47.093282   46126 cri.go:89] found id: ""
	I1128 00:48:47.093290   46126 logs.go:284] 1 containers: [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193]
	I1128 00:48:47.093345   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.097856   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 00:48:47.097916   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 00:48:47.150763   46126 cri.go:89] found id: "2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:47.150790   46126 cri.go:89] found id: ""
	I1128 00:48:47.150800   46126 logs.go:284] 1 containers: [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55]
	I1128 00:48:47.150855   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.155272   46126 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 00:48:47.155348   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 00:48:47.203549   46126 cri.go:89] found id: "cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:47.203586   46126 cri.go:89] found id: ""
	I1128 00:48:47.203600   46126 logs.go:284] 1 containers: [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64]
	I1128 00:48:47.203670   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.209313   46126 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 00:48:47.209384   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 00:48:42.410241   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:44.909607   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:46.893894   45815 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (16.438515297s)
	I1128 00:48:46.893965   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:46.909967   45815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:48:46.919457   45815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:48:46.928580   45815 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:48:46.928629   45815 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 00:48:46.989655   45815 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.0
	I1128 00:48:46.989772   45815 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 00:48:47.162717   45815 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 00:48:47.162868   45815 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 00:48:47.163002   45815 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 00:48:47.453392   45815 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 00:48:47.455125   45815 out.go:204]   - Generating certificates and keys ...
	I1128 00:48:47.455291   45815 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 00:48:47.455388   45815 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 00:48:47.455530   45815 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 00:48:47.455605   45815 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 00:48:47.456116   45815 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 00:48:47.456786   45815 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 00:48:47.457320   45815 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 00:48:47.457814   45815 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 00:48:47.458228   45815 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 00:48:47.458584   45815 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 00:48:47.458984   45815 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 00:48:47.459080   45815 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 00:48:47.654823   45815 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 00:48:47.858053   45815 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1128 00:48:48.006981   45815 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 00:48:48.256244   45815 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 00:48:48.381440   45815 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 00:48:48.381976   45815 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 00:48:48.384696   45815 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 00:48:48.386824   45815 out.go:204]   - Booting up control plane ...
	I1128 00:48:48.386943   45815 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 00:48:48.387057   45815 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 00:48:48.387155   45815 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 00:48:48.404036   45815 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 00:48:48.408139   45815 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 00:48:48.408584   45815 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 00:48:48.539731   45815 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 00:48:47.259312   46126 cri.go:89] found id: ""
	I1128 00:48:47.259343   46126 logs.go:284] 0 containers: []
	W1128 00:48:47.259353   46126 logs.go:286] No container was found matching "kindnet"
	I1128 00:48:47.259361   46126 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 00:48:47.259421   46126 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 00:48:47.308650   46126 cri.go:89] found id: "fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:47.308681   46126 cri.go:89] found id: ""
	I1128 00:48:47.308692   46126 logs.go:284] 1 containers: [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc]
	I1128 00:48:47.308764   46126 ssh_runner.go:195] Run: which crictl
	I1128 00:48:47.313702   46126 logs.go:123] Gathering logs for dmesg ...
	I1128 00:48:47.313727   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 00:48:47.327753   46126 logs.go:123] Gathering logs for describe nodes ...
	I1128 00:48:47.327788   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 00:48:47.490493   46126 logs.go:123] Gathering logs for etcd [0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c] ...
	I1128 00:48:47.490525   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c0deffc33b75ae33dd2abb0bcb0e0d278db412717f9bbc0c8db248964b8008c"
	I1128 00:48:47.554064   46126 logs.go:123] Gathering logs for coredns [02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b] ...
	I1128 00:48:47.554097   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02084fe546b602a674eea090825275300589bf3b70fca970deae32e68596919b"
	I1128 00:48:47.604401   46126 logs.go:123] Gathering logs for kube-proxy [2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55] ...
	I1128 00:48:47.604433   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d6fefc920655efc1f7449b6e8d433263e0cb62d08d082f42d7c9f807f916e55"
	I1128 00:48:47.643173   46126 logs.go:123] Gathering logs for kube-controller-manager [cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64] ...
	I1128 00:48:47.643211   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdf1978d16c71e95deabd84d17e449e676599d91286ac6400968c6bf3b7a9a64"
	I1128 00:48:47.707400   46126 logs.go:123] Gathering logs for storage-provisioner [fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc] ...
	I1128 00:48:47.707432   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe8f8f443aabeb3e155e394872d16a652f28cddcf228e518ccc61b4ff7f90ebc"
	I1128 00:48:47.763831   46126 logs.go:123] Gathering logs for container status ...
	I1128 00:48:47.763860   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 00:48:47.817244   46126 logs.go:123] Gathering logs for kubelet ...
	I1128 00:48:47.817278   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 00:48:47.872462   46126 logs.go:123] Gathering logs for kube-apiserver [a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6] ...
	I1128 00:48:47.872499   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a108c17df3e3ae93eadddaf655720149ce5220623874f85d575f6887d99237c6"
	I1128 00:48:47.930695   46126 logs.go:123] Gathering logs for kube-scheduler [032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193] ...
	I1128 00:48:47.930729   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032c85dd651d9005d39f748f93408a750ef1160626bfb48c0d8be69ff9f6f193"
	I1128 00:48:47.987718   46126 logs.go:123] Gathering logs for CRI-O ...
	I1128 00:48:47.987748   46126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 00:48:50.856470   46126 system_pods.go:59] 8 kube-system pods found
	I1128 00:48:50.856510   46126 system_pods.go:61] "coredns-5dd5756b68-n7qpb" [d027f799-6ced-488e-a4f7-6df351193c64] Running
	I1128 00:48:50.856518   46126 system_pods.go:61] "etcd-default-k8s-diff-port-488423" [55bf80da-df13-4429-962c-7fdb5ab44ea8] Running
	I1128 00:48:50.856525   46126 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-488423" [88715645-e98e-42be-ad99-cc7711605abc] Running
	I1128 00:48:50.856533   46126 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-488423" [07935350-12e0-4e86-8f88-7e03890aa417] Running
	I1128 00:48:50.856539   46126 system_pods.go:61] "kube-proxy-2sfbm" [8d92ac1f-4070-4000-9bc6-3d277e0c8c6e] Running
	I1128 00:48:50.856545   46126 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-488423" [42baed98-6b29-4f33-8bb3-df082a1b36ce] Running
	I1128 00:48:50.856558   46126 system_pods.go:61] "metrics-server-57f55c9bc5-fk9xx" [8b0d0cd6-41c5-4b67-98f9-f046e959e0e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:48:50.856571   46126 system_pods.go:61] "storage-provisioner" [f1e6e7d1-86aa-403c-b753-2b94beb7d7b1] Running
	I1128 00:48:50.856579   46126 system_pods.go:74] duration metric: took 3.962140088s to wait for pod list to return data ...
	I1128 00:48:50.856589   46126 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:48:50.859308   46126 default_sa.go:45] found service account: "default"
	I1128 00:48:50.859338   46126 default_sa.go:55] duration metric: took 2.741136ms for default service account to be created ...
	I1128 00:48:50.859347   46126 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:48:50.865347   46126 system_pods.go:86] 8 kube-system pods found
	I1128 00:48:50.865371   46126 system_pods.go:89] "coredns-5dd5756b68-n7qpb" [d027f799-6ced-488e-a4f7-6df351193c64] Running
	I1128 00:48:50.865377   46126 system_pods.go:89] "etcd-default-k8s-diff-port-488423" [55bf80da-df13-4429-962c-7fdb5ab44ea8] Running
	I1128 00:48:50.865382   46126 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-488423" [88715645-e98e-42be-ad99-cc7711605abc] Running
	I1128 00:48:50.865387   46126 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-488423" [07935350-12e0-4e86-8f88-7e03890aa417] Running
	I1128 00:48:50.865391   46126 system_pods.go:89] "kube-proxy-2sfbm" [8d92ac1f-4070-4000-9bc6-3d277e0c8c6e] Running
	I1128 00:48:50.865395   46126 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-488423" [42baed98-6b29-4f33-8bb3-df082a1b36ce] Running
	I1128 00:48:50.865405   46126 system_pods.go:89] "metrics-server-57f55c9bc5-fk9xx" [8b0d0cd6-41c5-4b67-98f9-f046e959e0e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:48:50.865413   46126 system_pods.go:89] "storage-provisioner" [f1e6e7d1-86aa-403c-b753-2b94beb7d7b1] Running
	I1128 00:48:50.865425   46126 system_pods.go:126] duration metric: took 6.071837ms to wait for k8s-apps to be running ...
	I1128 00:48:50.865441   46126 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:48:50.865490   46126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:48:50.882729   46126 system_svc.go:56] duration metric: took 17.277766ms WaitForService to wait for kubelet.
	I1128 00:48:50.882767   46126 kubeadm.go:581] duration metric: took 4m22.572235871s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:48:50.882796   46126 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:48:50.886638   46126 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:48:50.886671   46126 node_conditions.go:123] node cpu capacity is 2
	I1128 00:48:50.886684   46126 node_conditions.go:105] duration metric: took 3.881703ms to run NodePressure ...
	I1128 00:48:50.886699   46126 start.go:228] waiting for startup goroutines ...
	I1128 00:48:50.886712   46126 start.go:233] waiting for cluster config update ...
	I1128 00:48:50.886725   46126 start.go:242] writing updated cluster config ...
	I1128 00:48:50.886995   46126 ssh_runner.go:195] Run: rm -f paused
	I1128 00:48:50.947562   46126 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 00:48:50.949119   46126 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-488423" cluster and "default" namespace by default
	I1128 00:48:47.419653   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:49.909410   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:51.909739   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:53.910387   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:56.408786   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:56.542000   45815 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002009 seconds
	I1128 00:48:56.567203   45815 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 00:48:56.583239   45815 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 00:48:57.114661   45815 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 00:48:57.114917   45815 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-473615 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 00:48:57.633030   45815 kubeadm.go:322] [bootstrap-token] Using token: vz7ey4.v2qfoncp2ok7nh54
	I1128 00:48:57.634835   45815 out.go:204]   - Configuring RBAC rules ...
	I1128 00:48:57.634961   45815 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 00:48:57.640535   45815 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 00:48:57.653911   45815 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 00:48:57.658740   45815 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 00:48:57.662927   45815 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 00:48:57.667238   45815 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 00:48:57.688281   45815 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 00:48:57.949630   45815 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 00:48:58.055744   45815 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 00:48:58.057024   45815 kubeadm.go:322] 
	I1128 00:48:58.057159   45815 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 00:48:58.057179   45815 kubeadm.go:322] 
	I1128 00:48:58.057290   45815 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 00:48:58.057310   45815 kubeadm.go:322] 
	I1128 00:48:58.057343   45815 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 00:48:58.057431   45815 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 00:48:58.057518   45815 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 00:48:58.057536   45815 kubeadm.go:322] 
	I1128 00:48:58.057601   45815 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 00:48:58.057609   45815 kubeadm.go:322] 
	I1128 00:48:58.057673   45815 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 00:48:58.057678   45815 kubeadm.go:322] 
	I1128 00:48:58.057719   45815 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 00:48:58.057787   45815 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 00:48:58.057841   45815 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 00:48:58.057844   45815 kubeadm.go:322] 
	I1128 00:48:58.057921   45815 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 00:48:58.057987   45815 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 00:48:58.057991   45815 kubeadm.go:322] 
	I1128 00:48:58.058062   45815 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vz7ey4.v2qfoncp2ok7nh54 \
	I1128 00:48:58.058148   45815 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1128 00:48:58.058183   45815 kubeadm.go:322] 	--control-plane 
	I1128 00:48:58.058198   45815 kubeadm.go:322] 
	I1128 00:48:58.058266   45815 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 00:48:58.058272   45815 kubeadm.go:322] 
	I1128 00:48:58.058347   45815 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vz7ey4.v2qfoncp2ok7nh54 \
	I1128 00:48:58.058449   45815 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1128 00:48:58.059375   45815 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 00:48:58.059404   45815 cni.go:84] Creating CNI manager for ""
	I1128 00:48:58.059415   45815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:48:58.061524   45815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:48:58.062981   45815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:48:58.121061   45815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:48:58.143978   45815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:48:58.144060   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:58.144068   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=no-preload-473615 minikube.k8s.io/updated_at=2023_11_28T00_48_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:58.495592   45815 ops.go:34] apiserver oom_adj: -16
	I1128 00:48:58.495756   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:58.590073   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:58.412254   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:00.912329   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:48:59.189174   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:48:59.688440   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:00.189285   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:00.688724   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:01.189197   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:01.688512   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:02.189219   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:02.689235   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:03.189405   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:03.689243   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:03.414190   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:05.909164   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:04.188645   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:04.688928   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:05.189330   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:05.689126   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:06.189257   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:06.688476   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:07.189386   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:07.689051   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:08.188961   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:08.689080   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:09.188591   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:09.688502   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:10.188492   45815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:10.303728   45815 kubeadm.go:1081] duration metric: took 12.159747313s to wait for elevateKubeSystemPrivileges.
	I1128 00:49:10.303773   45815 kubeadm.go:406] StartCluster complete in 5m13.413969558s
	I1128 00:49:10.303794   45815 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:49:10.303880   45815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:49:10.306274   45815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:49:10.306559   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:49:10.306678   45815 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:49:10.306764   45815 addons.go:69] Setting storage-provisioner=true in profile "no-preload-473615"
	I1128 00:49:10.306786   45815 addons.go:231] Setting addon storage-provisioner=true in "no-preload-473615"
	W1128 00:49:10.306799   45815 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:49:10.306822   45815 config.go:182] Loaded profile config "no-preload-473615": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 00:49:10.306844   45815 host.go:66] Checking if "no-preload-473615" exists ...
	I1128 00:49:10.306903   45815 addons.go:69] Setting default-storageclass=true in profile "no-preload-473615"
	I1128 00:49:10.306924   45815 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-473615"
	I1128 00:49:10.307065   45815 addons.go:69] Setting metrics-server=true in profile "no-preload-473615"
	I1128 00:49:10.307089   45815 addons.go:231] Setting addon metrics-server=true in "no-preload-473615"
	W1128 00:49:10.307097   45815 addons.go:240] addon metrics-server should already be in state true
	I1128 00:49:10.307140   45815 host.go:66] Checking if "no-preload-473615" exists ...
	I1128 00:49:10.307283   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.307284   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.307366   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.307313   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.307600   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.307650   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.323788   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I1128 00:49:10.324333   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.324915   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.324940   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.325212   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42505
	I1128 00:49:10.325655   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.325825   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.326138   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.326156   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.326346   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.326375   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.326504   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.326968   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.326991   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.330263   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44581
	I1128 00:49:10.331124   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.331538   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.331559   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.331951   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.332131   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:49:10.335360   45815 addons.go:231] Setting addon default-storageclass=true in "no-preload-473615"
	W1128 00:49:10.335378   45815 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:49:10.335405   45815 host.go:66] Checking if "no-preload-473615" exists ...
	I1128 00:49:10.335685   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.335715   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.346750   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42245
	I1128 00:49:10.346822   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46137
	I1128 00:49:10.347279   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.347400   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.347703   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.347731   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.347906   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.347919   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.347983   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.348096   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:49:10.348232   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.348429   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:49:10.350025   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:49:10.352544   45815 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 00:49:10.350506   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:49:10.355541   45815 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:49:10.354491   45815 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 00:49:10.356963   45815 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:49:10.356980   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:49:10.356993   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:49:10.355570   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 00:49:10.357068   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:49:10.356139   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I1128 00:49:10.356295   45815 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-473615" context rescaled to 1 replicas
	I1128 00:49:10.357149   45815 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.195 Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:49:10.358543   45815 out.go:177] * Verifying Kubernetes components...
	I1128 00:49:10.359926   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:49:10.357719   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.360555   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.360575   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.361020   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.361318   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.361551   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:49:10.361574   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.361736   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:49:10.361938   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:49:10.362037   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.362129   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:49:10.362295   45815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:49:10.362317   45815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:49:10.362381   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:49:10.362676   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:49:10.362699   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.362961   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:49:10.363188   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:49:10.363360   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:49:10.363499   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:49:10.381194   45815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42707
	I1128 00:49:10.381543   45815 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:49:10.382012   45815 main.go:141] libmachine: Using API Version  1
	I1128 00:49:10.382032   45815 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:49:10.382399   45815 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:49:10.382584   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetState
	I1128 00:49:10.384269   45815 main.go:141] libmachine: (no-preload-473615) Calling .DriverName
	I1128 00:49:10.384500   45815 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:49:10.384513   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:49:10.384527   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHHostname
	I1128 00:49:10.387448   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.388000   45815 main.go:141] libmachine: (no-preload-473615) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:93:0d", ip: ""} in network mk-no-preload-473615: {Iface:virbr4 ExpiryTime:2023-11-28 01:43:29 +0000 UTC Type:0 Mac:52:54:00:bb:93:0d Iaid: IPaddr:192.168.61.195 Prefix:24 Hostname:no-preload-473615 Clientid:01:52:54:00:bb:93:0d}
	I1128 00:49:10.388027   45815 main.go:141] libmachine: (no-preload-473615) DBG | domain no-preload-473615 has defined IP address 192.168.61.195 and MAC address 52:54:00:bb:93:0d in network mk-no-preload-473615
	I1128 00:49:10.388169   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHPort
	I1128 00:49:10.388335   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHKeyPath
	I1128 00:49:10.388477   45815 main.go:141] libmachine: (no-preload-473615) Calling .GetSSHUsername
	I1128 00:49:10.388578   45815 sshutil.go:53] new ssh client: &{IP:192.168.61.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/no-preload-473615/id_rsa Username:docker}
	I1128 00:49:10.513157   45815 node_ready.go:35] waiting up to 6m0s for node "no-preload-473615" to be "Ready" ...
	I1128 00:49:10.513251   45815 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 00:49:10.546158   45815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:49:10.566225   45815 node_ready.go:49] node "no-preload-473615" has status "Ready":"True"
	I1128 00:49:10.566248   45815 node_ready.go:38] duration metric: took 53.063342ms waiting for node "no-preload-473615" to be "Ready" ...
	I1128 00:49:10.566259   45815 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:49:10.589374   45815 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 00:49:10.589400   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 00:49:10.608085   45815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:49:10.657717   45815 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 00:49:10.657746   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 00:49:10.693300   45815 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:10.745796   45815 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:49:10.745821   45815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 00:49:10.820139   45815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:49:10.848411   45815 pod_ready.go:92] pod "etcd-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:10.848444   45815 pod_ready.go:81] duration metric: took 155.116855ms waiting for pod "etcd-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:10.848459   45815 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:11.035904   45815 pod_ready.go:92] pod "kube-apiserver-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:11.035929   45815 pod_ready.go:81] duration metric: took 187.461745ms waiting for pod "kube-apiserver-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:11.035941   45815 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:11.269000   45815 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1128 00:49:11.634167   45815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.087967346s)
	I1128 00:49:11.634213   45815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.026096699s)
	I1128 00:49:11.634226   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.634239   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.634250   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.634272   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.634578   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.634621   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.634637   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.634639   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.634649   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.634650   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.634656   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.634660   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.634595   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:11.634942   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:11.634958   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:11.634986   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.635009   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.634989   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.635049   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.657473   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:11.657495   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:11.657814   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:11.657828   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:11.758491   45815 pod_ready.go:92] pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:11.758514   45815 pod_ready.go:81] duration metric: took 722.565796ms waiting for pod "kube-controller-manager-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:11.758525   45815 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bv5lq" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:12.084449   45815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.264259029s)
	I1128 00:49:12.084510   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:12.084524   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:12.084846   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:12.084865   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:12.084875   45815 main.go:141] libmachine: Making call to close driver server
	I1128 00:49:12.084870   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:12.084885   45815 main.go:141] libmachine: (no-preload-473615) Calling .Close
	I1128 00:49:12.085142   45815 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:49:12.085152   45815 main.go:141] libmachine: (no-preload-473615) DBG | Closing plugin on server side
	I1128 00:49:12.085164   45815 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:49:12.085174   45815 addons.go:467] Verifying addon metrics-server=true in "no-preload-473615"
	I1128 00:49:12.087081   45815 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1128 00:49:08.409321   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:10.909836   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:12.088572   45815 addons.go:502] enable addons completed in 1.781896775s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1128 00:49:13.830651   45815 pod_ready.go:102] pod "kube-proxy-bv5lq" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:14.830780   45815 pod_ready.go:92] pod "kube-proxy-bv5lq" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:14.830805   45815 pod_ready.go:81] duration metric: took 3.072274458s waiting for pod "kube-proxy-bv5lq" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:14.830815   45815 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:14.836248   45815 pod_ready.go:92] pod "kube-scheduler-no-preload-473615" in "kube-system" namespace has status "Ready":"True"
	I1128 00:49:14.836266   45815 pod_ready.go:81] duration metric: took 5.444378ms waiting for pod "kube-scheduler-no-preload-473615" in "kube-system" namespace to be "Ready" ...
	I1128 00:49:14.836273   45815 pod_ready.go:38] duration metric: took 4.270002588s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:49:14.836288   45815 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:49:14.836329   45815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:49:14.860322   45815 api_server.go:72] duration metric: took 4.503144983s to wait for apiserver process to appear ...
	I1128 00:49:14.860354   45815 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:49:14.860375   45815 api_server.go:253] Checking apiserver healthz at https://192.168.61.195:8443/healthz ...
	I1128 00:49:14.866977   45815 api_server.go:279] https://192.168.61.195:8443/healthz returned 200:
	ok
	I1128 00:49:14.868294   45815 api_server.go:141] control plane version: v1.29.0-rc.0
	I1128 00:49:14.868318   45815 api_server.go:131] duration metric: took 7.955565ms to wait for apiserver health ...
	I1128 00:49:14.868328   45815 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:49:14.875943   45815 system_pods.go:59] 8 kube-system pods found
	I1128 00:49:14.875972   45815 system_pods.go:61] "coredns-76f75df574-kbrjg" [881031bb-af46-48a7-b609-7fb1c96b2056] Running
	I1128 00:49:14.875979   45815 system_pods.go:61] "etcd-no-preload-473615" [ae2b57ca-5a22-4f4b-b227-00edfbb3b520] Running
	I1128 00:49:14.875986   45815 system_pods.go:61] "kube-apiserver-no-preload-473615" [9e9104c8-ee9f-4370-b92e-d301ea9cd880] Running
	I1128 00:49:14.875993   45815 system_pods.go:61] "kube-controller-manager-no-preload-473615" [f52dccb6-3d88-44b2-b733-38dd240dffa5] Running
	I1128 00:49:14.875999   45815 system_pods.go:61] "kube-proxy-bv5lq" [fe88f49f-5fc1-4877-a982-38fee04c9e2d] Running
	I1128 00:49:14.876005   45815 system_pods.go:61] "kube-scheduler-no-preload-473615" [8d6a3177-757a-493e-ba5e-265f95d6f462] Running
	I1128 00:49:14.876019   45815 system_pods.go:61] "metrics-server-57f55c9bc5-mpqdq" [8cef6d4c-e932-4c97-8d87-3b4c3777c8b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:49:14.876031   45815 system_pods.go:61] "storage-provisioner" [b8fc9309-7354-44e3-aa10-f4fb3c185f62] Running
	I1128 00:49:14.876042   45815 system_pods.go:74] duration metric: took 7.70749ms to wait for pod list to return data ...
	I1128 00:49:14.876058   45815 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:49:14.918080   45815 default_sa.go:45] found service account: "default"
	I1128 00:49:14.918107   45815 default_sa.go:55] duration metric: took 42.036279ms for default service account to be created ...
	I1128 00:49:14.918119   45815 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:49:15.120338   45815 system_pods.go:86] 8 kube-system pods found
	I1128 00:49:15.120368   45815 system_pods.go:89] "coredns-76f75df574-kbrjg" [881031bb-af46-48a7-b609-7fb1c96b2056] Running
	I1128 00:49:15.120376   45815 system_pods.go:89] "etcd-no-preload-473615" [ae2b57ca-5a22-4f4b-b227-00edfbb3b520] Running
	I1128 00:49:15.120383   45815 system_pods.go:89] "kube-apiserver-no-preload-473615" [9e9104c8-ee9f-4370-b92e-d301ea9cd880] Running
	I1128 00:49:15.120390   45815 system_pods.go:89] "kube-controller-manager-no-preload-473615" [f52dccb6-3d88-44b2-b733-38dd240dffa5] Running
	I1128 00:49:15.120395   45815 system_pods.go:89] "kube-proxy-bv5lq" [fe88f49f-5fc1-4877-a982-38fee04c9e2d] Running
	I1128 00:49:15.120401   45815 system_pods.go:89] "kube-scheduler-no-preload-473615" [8d6a3177-757a-493e-ba5e-265f95d6f462] Running
	I1128 00:49:15.120413   45815 system_pods.go:89] "metrics-server-57f55c9bc5-mpqdq" [8cef6d4c-e932-4c97-8d87-3b4c3777c8b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:49:15.120420   45815 system_pods.go:89] "storage-provisioner" [b8fc9309-7354-44e3-aa10-f4fb3c185f62] Running
	I1128 00:49:15.120437   45815 system_pods.go:126] duration metric: took 202.310611ms to wait for k8s-apps to be running ...
	I1128 00:49:15.120452   45815 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:49:15.120501   45815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:49:15.134858   45815 system_svc.go:56] duration metric: took 14.396652ms WaitForService to wait for kubelet.
	I1128 00:49:15.134886   45815 kubeadm.go:581] duration metric: took 4.777716544s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:49:15.134902   45815 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:49:15.318344   45815 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:49:15.318370   45815 node_conditions.go:123] node cpu capacity is 2
	I1128 00:49:15.318380   45815 node_conditions.go:105] duration metric: took 183.473974ms to run NodePressure ...
	I1128 00:49:15.318390   45815 start.go:228] waiting for startup goroutines ...
	I1128 00:49:15.318396   45815 start.go:233] waiting for cluster config update ...
	I1128 00:49:15.318405   45815 start.go:242] writing updated cluster config ...
	I1128 00:49:15.318651   45815 ssh_runner.go:195] Run: rm -f paused
	I1128 00:49:15.368036   45815 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.0 (minor skew: 1)
	I1128 00:49:15.369853   45815 out.go:177] * Done! kubectl is now configured to use "no-preload-473615" cluster and "default" namespace by default
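The api_server.go lines just above poll https://192.168.61.195:8443/healthz until it answers 200 "ok" before the control-plane version is reported. A minimal standalone Go sketch of that kind of poll (not minikube's actual implementation; the endpoint and rough timing are taken from the log, and TLS verification is skipped here purely to keep the sketch short):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz keeps hitting the apiserver's /healthz endpoint until it
// returns HTTP 200 or the overall timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cluster-signed certificate; verification is
		// skipped in this sketch only. minikube itself trusts the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	// Endpoint copied from the log above; adjust for your own cluster.
	if err := waitForHealthz("https://192.168.61.195:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
}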
	I1128 00:49:12.909910   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:15.420062   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:17.421038   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:19.909444   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:21.910293   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:24.412962   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:26.908733   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:28.910353   45269 pod_ready.go:102] pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace has status "Ready":"False"
	I1128 00:49:31.104114   45269 pod_ready.go:81] duration metric: took 4m0.000750315s waiting for pod "metrics-server-74d5856cc6-vfkpf" in "kube-system" namespace to be "Ready" ...
	E1128 00:49:31.104164   45269 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:49:31.104219   45269 pod_ready.go:38] duration metric: took 4m1.201800344s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:49:31.104258   45269 kubeadm.go:640] restartCluster took 5m3.38216869s
	W1128 00:49:31.104338   45269 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
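The wait that just timed out is a readiness poll on the metrics-server pod: pod_ready.go keeps re-checking the pod's Ready condition until it turns True or the 4m0s budget runs out, and only then gives up and resets the cluster. A rough client-go sketch of such a poll (illustrative only, not minikube's pod_ready.go; the pod name and the 4-minute budget come from the log, and the kubeconfig path is a placeholder to be replaced with your own):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; point this at the kubeconfig for your cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-74d5856cc6-vfkpf", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}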
	I1128 00:49:31.104371   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 00:49:35.883236   45269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.778829992s)
	I1128 00:49:35.883312   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:49:35.898846   45269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:49:35.910716   45269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:49:35.921838   45269 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 00:49:35.921883   45269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1128 00:49:35.987683   45269 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1128 00:49:35.987889   45269 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 00:49:36.153771   45269 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 00:49:36.153926   45269 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 00:49:36.154056   45269 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 00:49:36.387112   45269 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 00:49:36.387236   45269 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 00:49:36.394929   45269 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1128 00:49:36.523951   45269 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 00:49:36.526180   45269 out.go:204]   - Generating certificates and keys ...
	I1128 00:49:36.526284   45269 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 00:49:36.526378   45269 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 00:49:36.526508   45269 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 00:49:36.526603   45269 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 00:49:36.526723   45269 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 00:49:36.526807   45269 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 00:49:36.526928   45269 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 00:49:36.527026   45269 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 00:49:36.527127   45269 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 00:49:36.527671   45269 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 00:49:36.527734   45269 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 00:49:36.527807   45269 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 00:49:36.966756   45269 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 00:49:37.138717   45269 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 00:49:37.307916   45269 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 00:49:37.374115   45269 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 00:49:37.375393   45269 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 00:49:37.377224   45269 out.go:204]   - Booting up control plane ...
	I1128 00:49:37.377338   45269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 00:49:37.381887   45269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 00:49:37.383114   45269 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 00:49:37.384032   45269 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 00:49:37.387460   45269 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 00:49:47.893342   45269 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.504508 seconds
	I1128 00:49:47.893497   45269 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 00:49:47.911409   45269 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 00:49:48.437988   45269 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 00:49:48.438226   45269 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-732472 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1128 00:49:48.947631   45269 kubeadm.go:322] [bootstrap-token] Using token: g2kx2b.r3qu6fui94rrmu2m
	I1128 00:49:48.949581   45269 out.go:204]   - Configuring RBAC rules ...
	I1128 00:49:48.949746   45269 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 00:49:48.960004   45269 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 00:49:48.969068   45269 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 00:49:48.973998   45269 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 00:49:48.982331   45269 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 00:49:49.099721   45269 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 00:49:49.367382   45269 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 00:49:49.369069   45269 kubeadm.go:322] 
	I1128 00:49:49.369159   45269 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 00:49:49.369196   45269 kubeadm.go:322] 
	I1128 00:49:49.369325   45269 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 00:49:49.369339   45269 kubeadm.go:322] 
	I1128 00:49:49.369383   45269 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 00:49:49.369449   45269 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 00:49:49.369519   45269 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 00:49:49.369541   45269 kubeadm.go:322] 
	I1128 00:49:49.369619   45269 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 00:49:49.369725   45269 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 00:49:49.369822   45269 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 00:49:49.369839   45269 kubeadm.go:322] 
	I1128 00:49:49.369975   45269 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1128 00:49:49.370080   45269 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 00:49:49.370092   45269 kubeadm.go:322] 
	I1128 00:49:49.370202   45269 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token g2kx2b.r3qu6fui94rrmu2m \
	I1128 00:49:49.370371   45269 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 \
	I1128 00:49:49.370419   45269 kubeadm.go:322]     --control-plane 	  
	I1128 00:49:49.370432   45269 kubeadm.go:322] 
	I1128 00:49:49.370515   45269 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 00:49:49.370527   45269 kubeadm.go:322] 
	I1128 00:49:49.370639   45269 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g2kx2b.r3qu6fui94rrmu2m \
	I1128 00:49:49.370783   45269 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:b6483b681756340ee04e0c560bab2d54ae9ffb57e655ca6b4918ec13f41c33b6 
	I1128 00:49:49.371106   45269 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 00:49:49.371134   45269 cni.go:84] Creating CNI manager for ""
	I1128 00:49:49.371148   45269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 00:49:49.373008   45269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:49:49.374371   45269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:49:49.384861   45269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:49:49.402517   45269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 00:49:49.402582   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:49.402598   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=old-k8s-version-732472 minikube.k8s.io/updated_at=2023_11_28T00_49_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:49.441523   45269 ops.go:34] apiserver oom_adj: -16
	I1128 00:49:49.674343   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:49.796920   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:50.420537   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:50.920042   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:51.420533   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:51.920538   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:52.420730   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:52.920078   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:53.420670   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:53.920876   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:54.420798   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:54.920702   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:55.420180   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:55.920033   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:56.420702   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:56.920106   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:57.420244   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:57.920637   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:58.420226   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:58.920874   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:59.420228   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:49:59.920070   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:00.420845   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:00.920883   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:01.420977   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:01.920275   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:02.420097   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:02.920582   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:03.420001   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:03.919906   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:04.420071   45269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 00:50:04.580992   45269 kubeadm.go:1081] duration metric: took 15.178468662s to wait for elevateKubeSystemPrivileges.
	I1128 00:50:04.581023   45269 kubeadm.go:406] StartCluster complete in 5m36.912120738s
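The block of repeated `kubectl get sa default` runs above is a plain retry loop: elevateKubeSystemPrivileges keeps asking for the default service account until it exists, which here took about 15 seconds. A hedged sketch of that pattern (standalone and run locally for illustration; minikube issues the same command through its SSH runner inside the guest, with the binary and kubeconfig paths shown in the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA re-runs `kubectl get sa default` until it succeeds,
// i.e. until the default service account has been created.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	err := waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.16.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		1*time.Minute,
	)
	if err != nil {
		panic(err)
	}
}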
	I1128 00:50:04.581042   45269 settings.go:142] acquiring lock: {Name:mk3bb6e8435310f03569574f6edf7dfe735375ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:50:04.581125   45269 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:50:04.582704   45269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/kubeconfig: {Name:mkf37c76aaaa8da775303a81f6d56ef60285f3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:50:04.582966   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 00:50:04.583000   45269 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 00:50:04.583077   45269 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-732472"
	I1128 00:50:04.583105   45269 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-732472"
	W1128 00:50:04.583116   45269 addons.go:240] addon storage-provisioner should already be in state true
	I1128 00:50:04.583192   45269 host.go:66] Checking if "old-k8s-version-732472" exists ...
	I1128 00:50:04.583206   45269 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-732472"
	I1128 00:50:04.583227   45269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-732472"
	I1128 00:50:04.583540   45269 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-732472"
	I1128 00:50:04.583565   45269 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-732472"
	W1128 00:50:04.583573   45269 addons.go:240] addon metrics-server should already be in state true
	I1128 00:50:04.583609   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.583635   45269 host.go:66] Checking if "old-k8s-version-732472" exists ...
	I1128 00:50:04.583640   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.583676   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.583643   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.583193   45269 config.go:182] Loaded profile config "old-k8s-version-732472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 00:50:04.584015   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.584069   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.602419   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36231
	I1128 00:50:04.602558   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35981
	I1128 00:50:04.602646   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36113
	I1128 00:50:04.603020   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.603118   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.603196   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.603571   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.603572   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.603597   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.603611   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.603729   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.603753   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.603939   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.603973   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.604086   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.604378   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:50:04.604489   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.604521   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.604617   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.604646   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.608900   45269 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-732472"
	W1128 00:50:04.608925   45269 addons.go:240] addon default-storageclass should already be in state true
	I1128 00:50:04.608953   45269 host.go:66] Checking if "old-k8s-version-732472" exists ...
	I1128 00:50:04.611555   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.611628   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.622409   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33595
	I1128 00:50:04.622446   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45323
	I1128 00:50:04.622876   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.623000   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.623394   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.623424   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.623534   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.623567   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.623886   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.624365   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.624368   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:50:04.624556   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:50:04.626412   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:50:04.626443   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:50:04.629006   45269 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 00:50:04.630723   45269 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 00:50:04.632378   45269 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:50:04.632395   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 00:50:04.632409   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:50:04.630641   45269 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 00:50:04.632467   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 00:50:04.632479   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:50:04.632126   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38563
	I1128 00:50:04.633062   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.633666   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.633692   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.634447   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.635020   45269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 00:50:04.635053   45269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 00:50:04.636332   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.636387   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.636733   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:50:04.636772   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.636795   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:50:04.636830   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.636952   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:50:04.637085   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:50:04.637132   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:50:04.637245   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:50:04.637296   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:50:04.637413   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:50:04.637448   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:50:04.637594   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	I1128 00:50:04.651941   45269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I1128 00:50:04.652604   45269 main.go:141] libmachine: () Calling .GetVersion
	I1128 00:50:04.653192   45269 main.go:141] libmachine: Using API Version  1
	I1128 00:50:04.653222   45269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 00:50:04.653677   45269 main.go:141] libmachine: () Calling .GetMachineName
	I1128 00:50:04.653838   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetState
	I1128 00:50:04.655532   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .DriverName
	I1128 00:50:04.655848   45269 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 00:50:04.655868   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 00:50:04.655890   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHHostname
	I1128 00:50:04.658852   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.659252   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:2b:fd", ip: ""} in network mk-old-k8s-version-732472: {Iface:virbr1 ExpiryTime:2023-11-28 01:33:37 +0000 UTC Type:0 Mac:52:54:00:ff:2b:fd Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:old-k8s-version-732472 Clientid:01:52:54:00:ff:2b:fd}
	I1128 00:50:04.659280   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | domain old-k8s-version-732472 has defined IP address 192.168.39.172 and MAC address 52:54:00:ff:2b:fd in network mk-old-k8s-version-732472
	I1128 00:50:04.659426   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHPort
	I1128 00:50:04.659602   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHKeyPath
	I1128 00:50:04.659971   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .GetSSHUsername
	I1128 00:50:04.660096   45269 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/old-k8s-version-732472/id_rsa Username:docker}
	W1128 00:50:04.792826   45269 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "old-k8s-version-732472" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1128 00:50:04.792863   45269 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1128 00:50:04.792890   45269 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 00:50:04.795799   45269 out.go:177] * Verifying Kubernetes components...
	I1128 00:50:04.797469   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:50:04.870889   45269 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-732472" to be "Ready" ...
	I1128 00:50:04.871024   45269 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 00:50:04.888333   45269 node_ready.go:49] node "old-k8s-version-732472" has status "Ready":"True"
	I1128 00:50:04.888359   45269 node_ready.go:38] duration metric: took 17.44205ms waiting for node "old-k8s-version-732472" to be "Ready" ...
	I1128 00:50:04.888372   45269 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:50:04.899414   45269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 00:50:04.902681   45269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:04.904708   45269 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 00:50:04.904734   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 00:50:04.947930   45269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 00:50:04.977094   45269 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 00:50:04.977123   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 00:50:05.195712   45269 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:50:05.195795   45269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 00:50:05.292058   45269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 00:50:06.383144   45269 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.512083846s)
	I1128 00:50:06.383170   45269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.483727542s)
	I1128 00:50:06.383180   45269 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1128 00:50:06.383208   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.383221   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.383572   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.383599   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.383608   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.383606   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Closing plugin on server side
	I1128 00:50:06.383618   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.383835   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.383851   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.383870   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Closing plugin on server side
	I1128 00:50:06.423407   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.423447   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.423758   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.423783   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.423799   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Closing plugin on server side
	I1128 00:50:06.678261   45269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.73029562s)
	I1128 00:50:06.678312   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.678326   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.678640   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.678655   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.678663   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.678672   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.678927   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.678955   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.762082   45269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.46997729s)
	I1128 00:50:06.762140   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.762160   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.762538   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.762557   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.762569   45269 main.go:141] libmachine: Making call to close driver server
	I1128 00:50:06.762579   45269 main.go:141] libmachine: (old-k8s-version-732472) Calling .Close
	I1128 00:50:06.762599   45269 main.go:141] libmachine: (old-k8s-version-732472) DBG | Closing plugin on server side
	I1128 00:50:06.762815   45269 main.go:141] libmachine: Successfully made call to close driver server
	I1128 00:50:06.762830   45269 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 00:50:06.762840   45269 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-732472"
	I1128 00:50:06.765825   45269 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 00:50:06.767637   45269 addons.go:502] enable addons completed in 2.184637132s: enabled=[default-storageclass storage-provisioner metrics-server]
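The metrics-server addon is enabled by copying its four manifests to /etc/kubernetes/addons and applying them in one kubectl invocation, as logged above. An illustrative standalone version of that apply step (minikube runs the same command over SSH inside the guest; this sketch assumes the manifests and the v1.16.0 kubectl binary are already in place at the paths shown):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Mirrors the logged command: run kubectl with the in-guest kubeconfig and
	// apply all four metrics-server manifests in a single invocation.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.16.0/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}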
	I1128 00:50:06.959495   45269 pod_ready.go:102] pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace has status "Ready":"False"
	I1128 00:50:08.961160   45269 pod_ready.go:102] pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace has status "Ready":"False"
	I1128 00:50:11.459984   45269 pod_ready.go:102] pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace has status "Ready":"False"
	I1128 00:50:12.959294   45269 pod_ready.go:92] pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace has status "Ready":"True"
	I1128 00:50:12.959317   45269 pod_ready.go:81] duration metric: took 8.056612005s waiting for pod "coredns-5644d7b6d9-5s84s" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.959326   45269 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-fsfpw" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.973244   45269 pod_ready.go:92] pod "coredns-5644d7b6d9-fsfpw" in "kube-system" namespace has status "Ready":"True"
	I1128 00:50:12.973268   45269 pod_ready.go:81] duration metric: took 13.936307ms waiting for pod "coredns-5644d7b6d9-fsfpw" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.973278   45269 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-88chq" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.980471   45269 pod_ready.go:92] pod "kube-proxy-88chq" in "kube-system" namespace has status "Ready":"True"
	I1128 00:50:12.980489   45269 pod_ready.go:81] duration metric: took 7.20414ms waiting for pod "kube-proxy-88chq" in "kube-system" namespace to be "Ready" ...
	I1128 00:50:12.980496   45269 pod_ready.go:38] duration metric: took 8.092113593s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:50:12.980511   45269 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:50:12.980554   45269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:50:12.996604   45269 api_server.go:72] duration metric: took 8.203675443s to wait for apiserver process to appear ...
	I1128 00:50:12.996645   45269 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:50:12.996670   45269 api_server.go:253] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I1128 00:50:13.006987   45269 api_server.go:279] https://192.168.39.172:8443/healthz returned 200:
	ok
	I1128 00:50:13.007986   45269 api_server.go:141] control plane version: v1.16.0
	I1128 00:50:13.008003   45269 api_server.go:131] duration metric: took 11.352257ms to wait for apiserver health ...
	I1128 00:50:13.008010   45269 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:50:13.013658   45269 system_pods.go:59] 5 kube-system pods found
	I1128 00:50:13.013677   45269 system_pods.go:61] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.013682   45269 system_pods.go:61] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.013686   45269 system_pods.go:61] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.013693   45269 system_pods.go:61] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.013697   45269 system_pods.go:61] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.013703   45269 system_pods.go:74] duration metric: took 5.688575ms to wait for pod list to return data ...
	I1128 00:50:13.013710   45269 default_sa.go:34] waiting for default service account to be created ...
	I1128 00:50:13.016210   45269 default_sa.go:45] found service account: "default"
	I1128 00:50:13.016228   45269 default_sa.go:55] duration metric: took 2.513069ms for default service account to be created ...
	I1128 00:50:13.016234   45269 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 00:50:13.020464   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:13.020488   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.020496   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.020502   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.020513   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.020522   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.020544   45269 retry.go:31] will retry after 244.092512ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:13.270858   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:13.270893   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.270901   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.270907   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.270918   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.270926   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.270946   45269 retry.go:31] will retry after 311.602199ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:13.588013   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:13.588041   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.588047   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.588051   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.588057   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.588062   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.588076   45269 retry.go:31] will retry after 298.08088ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:13.891272   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:13.891302   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:13.891307   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:13.891311   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:13.891318   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:13.891323   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:13.891339   45269 retry.go:31] will retry after 474.390305ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:14.371201   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:14.371230   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:14.371236   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:14.371241   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:14.371248   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:14.371253   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:14.371269   45269 retry.go:31] will retry after 719.510586ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:15.096817   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:15.096846   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:15.096851   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:15.096855   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:15.096862   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:15.096866   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:15.096881   45269 retry.go:31] will retry after 684.457384ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:15.786918   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:15.786947   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:15.786952   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:15.786956   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:15.786962   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:15.786967   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:15.786982   45269 retry.go:31] will retry after 721.543291ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:16.513230   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:16.513258   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:16.513263   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:16.513268   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:16.513275   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:16.513280   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:16.513296   45269 retry.go:31] will retry after 1.405502561s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:17.926572   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:17.926610   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:17.926619   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:17.926626   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:17.926636   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:17.926642   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:17.926662   45269 retry.go:31] will retry after 1.65088536s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:19.584099   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:19.584130   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:19.584136   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:19.584140   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:19.584147   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:19.584152   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:19.584168   45269 retry.go:31] will retry after 1.660488369s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:21.250659   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:21.250706   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:21.250714   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:21.250719   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:21.250729   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:21.250736   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:21.250757   45269 retry.go:31] will retry after 1.762203818s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:23.018771   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:23.018798   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:23.018804   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:23.018808   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:23.018815   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:23.018819   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:23.018837   45269 retry.go:31] will retry after 2.558255345s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:25.584363   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:25.584394   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:25.584402   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:25.584409   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:25.584417   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:25.584422   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:25.584446   45269 retry.go:31] will retry after 4.457632402s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:30.049343   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:30.049374   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:30.049381   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:30.049388   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:30.049398   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:30.049406   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:30.049426   45269 retry.go:31] will retry after 5.077489821s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:35.133974   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:35.134001   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:35.134006   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:35.134010   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:35.134022   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:35.134029   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:35.134048   45269 retry.go:31] will retry after 5.675627515s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:40.814779   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:40.814808   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:40.814814   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:40.814818   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:40.814825   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:40.814829   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:40.814846   45269 retry.go:31] will retry after 5.701774609s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:46.524426   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:46.524467   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:46.524475   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:46.524482   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:46.524492   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:46.524499   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:46.524521   45269 retry.go:31] will retry after 7.322045517s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:50:53.852348   45269 system_pods.go:86] 5 kube-system pods found
	I1128 00:50:53.852378   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:50:53.852387   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:50:53.852394   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:50:53.852406   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:50:53.852413   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:50:53.852442   45269 retry.go:31] will retry after 12.532542473s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:51:06.392828   45269 system_pods.go:86] 9 kube-system pods found
	I1128 00:51:06.392858   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:51:06.392863   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:51:06.392872   45269 system_pods.go:89] "etcd-old-k8s-version-732472" [b839e564-30b4-4ddf-a7af-15a11ae6caaf] Pending
	I1128 00:51:06.392876   45269 system_pods.go:89] "kube-apiserver-old-k8s-version-732472" [7f8f59a8-21fb-4161-ba13-c123b21f74cb] Pending
	I1128 00:51:06.392882   45269 system_pods.go:89] "kube-controller-manager-old-k8s-version-732472" [0271d0e4-295a-47fc-a42f-77a8f9d71930] Pending
	I1128 00:51:06.392886   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:51:06.392889   45269 system_pods.go:89] "kube-scheduler-old-k8s-version-732472" [a22ecb05-e88d-4fc4-8e16-df419a9564e3] Pending
	I1128 00:51:06.392897   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:51:06.392901   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:51:06.392915   45269 retry.go:31] will retry after 10.519018157s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 00:51:16.918264   45269 system_pods.go:86] 9 kube-system pods found
	I1128 00:51:16.918303   45269 system_pods.go:89] "coredns-5644d7b6d9-5s84s" [4388650c-3956-44bf-86ea-6b64743166ca] Running
	I1128 00:51:16.918311   45269 system_pods.go:89] "coredns-5644d7b6d9-fsfpw" [a466ce19-debe-424d-9eec-00513557472b] Running
	I1128 00:51:16.918319   45269 system_pods.go:89] "etcd-old-k8s-version-732472" [b839e564-30b4-4ddf-a7af-15a11ae6caaf] Running
	I1128 00:51:16.918326   45269 system_pods.go:89] "kube-apiserver-old-k8s-version-732472" [7f8f59a8-21fb-4161-ba13-c123b21f74cb] Running
	I1128 00:51:16.918333   45269 system_pods.go:89] "kube-controller-manager-old-k8s-version-732472" [0271d0e4-295a-47fc-a42f-77a8f9d71930] Running
	I1128 00:51:16.918340   45269 system_pods.go:89] "kube-proxy-88chq" [273e27bd-a4a8-4fa9-913a-a67ee5a80990] Running
	I1128 00:51:16.918346   45269 system_pods.go:89] "kube-scheduler-old-k8s-version-732472" [a22ecb05-e88d-4fc4-8e16-df419a9564e3] Running
	I1128 00:51:16.918360   45269 system_pods.go:89] "metrics-server-74d5856cc6-nd9qp" [de534eb9-4a5c-400d-ba7c-da4bc1bef670] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:51:16.918375   45269 system_pods.go:89] "storage-provisioner" [9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e] Running
	I1128 00:51:16.918386   45269 system_pods.go:126] duration metric: took 1m3.902146285s to wait for k8s-apps to be running ...
	I1128 00:51:16.918398   45269 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 00:51:16.918445   45269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 00:51:16.937522   45269 system_svc.go:56] duration metric: took 19.116204ms WaitForService to wait for kubelet.
	I1128 00:51:16.937556   45269 kubeadm.go:581] duration metric: took 1m12.144633009s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 00:51:16.937577   45269 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:51:16.941812   45269 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 00:51:16.941838   45269 node_conditions.go:123] node cpu capacity is 2
	I1128 00:51:16.941849   45269 node_conditions.go:105] duration metric: took 4.264769ms to run NodePressure ...
	I1128 00:51:16.941859   45269 start.go:228] waiting for startup goroutines ...
	I1128 00:51:16.941865   45269 start.go:233] waiting for cluster config update ...
	I1128 00:51:16.941874   45269 start.go:242] writing updated cluster config ...
	I1128 00:51:16.942150   45269 ssh_runner.go:195] Run: rm -f paused
	I1128 00:51:16.992567   45269 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1128 00:51:16.994677   45269 out.go:177] 
	W1128 00:51:16.996083   45269 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1128 00:51:16.997442   45269 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1128 00:51:16.998644   45269 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-732472" cluster and "default" namespace by default
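	
	[Note] The "will retry after ..." lines above come from minikube's wait loop: the same system_pods check is re-run with growing, slightly jittered delays until etcd, kube-apiserver, kube-controller-manager and kube-scheduler are all reported Running (which happens here at 00:51:16, about 1m12s after the wait started). The following is a minimal, self-contained sketch of that backoff pattern, not minikube's actual retry.go implementation; the component names are taken from the log and the readiness check itself is simulated.
	
	// retry_sketch.go - illustrative only; assumes a fake readiness check.
	package main
	
	import (
		"fmt"
		"math/rand"
		"time"
	)
	
	// missingComponents simulates the system_pods check seen in the log: it
	// returns the control-plane components not yet reported as Running.
	func missingComponents(attempt int) []string {
		if attempt < 5 { // pretend the static pods appear on the 5th poll
			return []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"}
		}
		return nil
	}
	
	func main() {
		deadline := time.Now().Add(90 * time.Second)
		delay := 700 * time.Millisecond // roughly the first interval in the log
	
		for attempt := 1; ; attempt++ {
			missing := missingComponents(attempt)
			if len(missing) == 0 {
				fmt.Printf("all components running after %d attempts\n", attempt)
				return
			}
			if time.Now().After(deadline) {
				fmt.Printf("gave up, still missing: %v\n", missing)
				return
			}
			// Grow the delay and add jitter, mirroring the increasing
			// "will retry after ..." intervals printed by the wait loop.
			wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: missing components: %v\n", wait, missing)
			time.Sleep(wait)
			delay = delay * 3 / 2
		}
	}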
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 00:44:09 UTC, ends at Tue 2023-11-28 01:03:21 UTC. --
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.714937489Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3d9279a66ef5ba4ae3596d4bf3fb92de987a6e5d2eb6c74aa82ca7cd363f329,PodSandboxId:c499bf98989ccbb095beca531911e4a93230b5416b2b9877974699d543cc99d9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132607578901981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e,},Annotations:map[string]string{io.kubernetes.container.hash: 489ab746,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b1825f1d0c823d6402042029f120e8bf1bc20e1e5c148ea46b80e076d2ce506,PodSandboxId:c72b8f101411b171ec883d309a14daa4fb6afe576630c197058041ed5e01cbc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701132606650688747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88chq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 273e27bd-a4a8-4fa9-913a-a67ee5a80990,},Annotations:map[string]string{io.kubernetes.container.hash: 211ab8e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2177833c17712485813aed4f24f679d723a5475f4c1e4c3ee7a31460d51f7e2,PodSandboxId:ef320b5118d97e38314b7e1ac09ff023b1e7920f3a9131622891cc71b43bef32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701132606085911598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-fsfpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a466ce19-debe-424d-9eec-00513557472b,},Annotations:map[string]string{io.kubernetes.container.hash: a9b094fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60c716bf8e7dd58475aa5c4adaa00e92d364219f163e09118ce73e14c7c817e,PodSandboxId:ce0022ee5b25eb4f4c61fc2700352eec46f2699cf1b9055299e80f3ab938dc5d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701132605955777949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5s84s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4388650c-3956-44bf-86ea-6b64743166ca,},Annotat
ions:map[string]string{io.kubernetes.container.hash: a9b094fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e1c4fc0eff65b28c87e522b0d6bc9d46c17aa1d9752b63a9b673ac567a03cd,PodSandboxId:c586e2dd67a2c1fa5b172b80c95f0effa1b094c48cdc9dfc3053561b7b0518a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701132580435673670,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e287dac636cd18fa651d2219ad4ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 71b20b40,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1823a59d40c071836f4fd7cfa2e29752ad7e0c7464b7c9cda8a536c91acacd0d,PodSandboxId:a0326b82d62aaa1b9f93226cad826eeae2902d5435d34f6d1addd8e93b1f91ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701132579102946292,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ef985c9c8afad330ec7fb85589854860148ded23563c2ed76681bf82c48022,PodSandboxId:b81167430dd7c78b616968919912aa2b125b8c8dc621183c4014c5f349a6faeb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701132579004863537,Labels:map[string]string{io.kubernetes.container.
name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bcb5d5d5a7f5d15294b93974a89442f39c6d5b72f90c1bd8455ef232d30201,PodSandboxId:dd2bae0b694c320c2f59a1392dae169ea617fee2aa56f64e654f280bb65bce16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701132578706297173,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4b9e9d536b8786f0dbde3fec6faabba,},Annotations:map[string]string{io.kubernetes.container.hash: 3a6565be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dbaff1f3-3f57-4d20-8d46-65505d84f6ee name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.753179012Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7c05a6ad-c552-410f-9de0-2c9ce3cba54d name=/runtime.v1.RuntimeService/Version
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.753264306Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7c05a6ad-c552-410f-9de0-2c9ce3cba54d name=/runtime.v1.RuntimeService/Version
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.754876816Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e202635b-c461-4eed-be9d-b1f13675f7dd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.755248603Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133401755235510,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=e202635b-c461-4eed-be9d-b1f13675f7dd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.755795925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f68cc81c-c983-425d-98d3-a148ae154430 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.755843896Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f68cc81c-c983-425d-98d3-a148ae154430 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.756059335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3d9279a66ef5ba4ae3596d4bf3fb92de987a6e5d2eb6c74aa82ca7cd363f329,PodSandboxId:c499bf98989ccbb095beca531911e4a93230b5416b2b9877974699d543cc99d9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132607578901981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e,},Annotations:map[string]string{io.kubernetes.container.hash: 489ab746,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b1825f1d0c823d6402042029f120e8bf1bc20e1e5c148ea46b80e076d2ce506,PodSandboxId:c72b8f101411b171ec883d309a14daa4fb6afe576630c197058041ed5e01cbc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701132606650688747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88chq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 273e27bd-a4a8-4fa9-913a-a67ee5a80990,},Annotations:map[string]string{io.kubernetes.container.hash: 211ab8e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2177833c17712485813aed4f24f679d723a5475f4c1e4c3ee7a31460d51f7e2,PodSandboxId:ef320b5118d97e38314b7e1ac09ff023b1e7920f3a9131622891cc71b43bef32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701132606085911598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-fsfpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a466ce19-debe-424d-9eec-00513557472b,},Annotations:map[string]string{io.kubernetes.container.hash: a9b094fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60c716bf8e7dd58475aa5c4adaa00e92d364219f163e09118ce73e14c7c817e,PodSandboxId:ce0022ee5b25eb4f4c61fc2700352eec46f2699cf1b9055299e80f3ab938dc5d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701132605955777949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5s84s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4388650c-3956-44bf-86ea-6b64743166ca,},Annotat
ions:map[string]string{io.kubernetes.container.hash: a9b094fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e1c4fc0eff65b28c87e522b0d6bc9d46c17aa1d9752b63a9b673ac567a03cd,PodSandboxId:c586e2dd67a2c1fa5b172b80c95f0effa1b094c48cdc9dfc3053561b7b0518a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701132580435673670,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e287dac636cd18fa651d2219ad4ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 71b20b40,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1823a59d40c071836f4fd7cfa2e29752ad7e0c7464b7c9cda8a536c91acacd0d,PodSandboxId:a0326b82d62aaa1b9f93226cad826eeae2902d5435d34f6d1addd8e93b1f91ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701132579102946292,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ef985c9c8afad330ec7fb85589854860148ded23563c2ed76681bf82c48022,PodSandboxId:b81167430dd7c78b616968919912aa2b125b8c8dc621183c4014c5f349a6faeb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701132579004863537,Labels:map[string]string{io.kubernetes.container.
name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bcb5d5d5a7f5d15294b93974a89442f39c6d5b72f90c1bd8455ef232d30201,PodSandboxId:dd2bae0b694c320c2f59a1392dae169ea617fee2aa56f64e654f280bb65bce16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701132578706297173,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4b9e9d536b8786f0dbde3fec6faabba,},Annotations:map[string]string{io.kubernetes.container.hash: 3a6565be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f68cc81c-c983-425d-98d3-a148ae154430 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.790580194Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=82a7c85e-c937-4406-bc6b-f7ad64a88a75 name=/runtime.v1.RuntimeService/Version
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.790644231Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=82a7c85e-c937-4406-bc6b-f7ad64a88a75 name=/runtime.v1.RuntimeService/Version
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.792081125Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=faa44d7c-da87-403f-9543-3c985adeb5c2 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.792558753Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701133401792544176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=faa44d7c-da87-403f-9543-3c985adeb5c2 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.793422598Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=edbf0ae1-6f67-4186-a992-f04858bab7f5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.793474479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=edbf0ae1-6f67-4186-a992-f04858bab7f5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.793719139Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3d9279a66ef5ba4ae3596d4bf3fb92de987a6e5d2eb6c74aa82ca7cd363f329,PodSandboxId:c499bf98989ccbb095beca531911e4a93230b5416b2b9877974699d543cc99d9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132607578901981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e,},Annotations:map[string]string{io.kubernetes.container.hash: 489ab746,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b1825f1d0c823d6402042029f120e8bf1bc20e1e5c148ea46b80e076d2ce506,PodSandboxId:c72b8f101411b171ec883d309a14daa4fb6afe576630c197058041ed5e01cbc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701132606650688747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88chq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 273e27bd-a4a8-4fa9-913a-a67ee5a80990,},Annotations:map[string]string{io.kubernetes.container.hash: 211ab8e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2177833c17712485813aed4f24f679d723a5475f4c1e4c3ee7a31460d51f7e2,PodSandboxId:ef320b5118d97e38314b7e1ac09ff023b1e7920f3a9131622891cc71b43bef32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701132606085911598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-fsfpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a466ce19-debe-424d-9eec-00513557472b,},Annotations:map[string]string{io.kubernetes.container.hash: a9b094fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60c716bf8e7dd58475aa5c4adaa00e92d364219f163e09118ce73e14c7c817e,PodSandboxId:ce0022ee5b25eb4f4c61fc2700352eec46f2699cf1b9055299e80f3ab938dc5d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701132605955777949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5s84s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4388650c-3956-44bf-86ea-6b64743166ca,},Annotat
ions:map[string]string{io.kubernetes.container.hash: a9b094fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e1c4fc0eff65b28c87e522b0d6bc9d46c17aa1d9752b63a9b673ac567a03cd,PodSandboxId:c586e2dd67a2c1fa5b172b80c95f0effa1b094c48cdc9dfc3053561b7b0518a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701132580435673670,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e287dac636cd18fa651d2219ad4ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 71b20b40,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1823a59d40c071836f4fd7cfa2e29752ad7e0c7464b7c9cda8a536c91acacd0d,PodSandboxId:a0326b82d62aaa1b9f93226cad826eeae2902d5435d34f6d1addd8e93b1f91ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701132579102946292,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ef985c9c8afad330ec7fb85589854860148ded23563c2ed76681bf82c48022,PodSandboxId:b81167430dd7c78b616968919912aa2b125b8c8dc621183c4014c5f349a6faeb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701132579004863537,Labels:map[string]string{io.kubernetes.container.
name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bcb5d5d5a7f5d15294b93974a89442f39c6d5b72f90c1bd8455ef232d30201,PodSandboxId:dd2bae0b694c320c2f59a1392dae169ea617fee2aa56f64e654f280bb65bce16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701132578706297173,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4b9e9d536b8786f0dbde3fec6faabba,},Annotations:map[string]string{io.kubernetes.container.hash: 3a6565be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=edbf0ae1-6f67-4186-a992-f04858bab7f5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.801515221Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9e8d1bb9-90b9-4008-b650-58ce1ff537d5 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.801877643Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ef148471e3870002824b796b37f4f030af3565f02a7076cd6ab0f0a5e1fb03e7,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-nd9qp,Uid:de534eb9-4a5c-400d-ba7c-da4bc1bef670,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132607917896538,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-nd9qp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de534eb9-4a5c-400d-ba7c-da4bc1bef670,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T00:50:07.566313946Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c499bf98989ccbb095beca531911e4a93230b5416b2b9877974699d543cc99d9,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9f880d43-3a6e-4eed-8f26-1a1ca9bdc6
0e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132607033839400,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\
"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-11-28T00:50:06.684722076Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ef320b5118d97e38314b7e1ac09ff023b1e7920f3a9131622891cc71b43bef32,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-fsfpw,Uid:a466ce19-debe-424d-9eec-00513557472b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132605104635324,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-fsfpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a466ce19-debe-424d-9eec-00513557472b,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T00:50:04.739557485Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce0022ee5b25eb4f4c61fc2700352eec46f2699cf1b9055299e80f3ab938dc5d,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-5s84s,Uid:4388650c-3956-
44bf-86ea-6b64743166ca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132604996764648,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-5s84s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4388650c-3956-44bf-86ea-6b64743166ca,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T00:50:04.635768318Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c72b8f101411b171ec883d309a14daa4fb6afe576630c197058041ed5e01cbc9,Metadata:&PodSandboxMetadata{Name:kube-proxy-88chq,Uid:273e27bd-a4a8-4fa9-913a-a67ee5a80990,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132604452608809,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-88chq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 273e27bd-a4a8-4fa9-913a-a67ee5a80990,k8s-app: kube-proxy,pod-
template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T00:50:04.105481854Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a0326b82d62aaa1b9f93226cad826eeae2902d5435d34f6d1addd8e93b1f91ad,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-732472,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132578316458283,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2023-11-28T00:49:37.793690103Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b81167430dd7c78b616968919912aa2b125b8c8dc621183c4014c5f349a6faeb,Metadata:&
PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-732472,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132578304542594,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2023-11-28T00:49:37.793691239Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c586e2dd67a2c1fa5b172b80c95f0effa1b094c48cdc9dfc3053561b7b0518a0,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-732472,Uid:f3e287dac636cd18fa651d2219ad4ea9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132578254103069,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8
s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e287dac636cd18fa651d2219ad4ea9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f3e287dac636cd18fa651d2219ad4ea9,kubernetes.io/config.seen: 2023-11-28T00:49:37.793681603Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dd2bae0b694c320c2f59a1392dae169ea617fee2aa56f64e654f280bb65bce16,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-732472,Uid:a4b9e9d536b8786f0dbde3fec6faabba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132578249064316,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4b9e9d536b8786f0dbde3fec6faabba,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a4b9e9d536b8786f0dbde3fec6faabba,kubernetes.io/config.seen: 2023-11-28T00:49:37.793688315Z
,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=9e8d1bb9-90b9-4008-b650-58ce1ff537d5 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.802598435Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a22a7858-8a71-4432-8c05-2bbbe7e8821b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.802667676Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a22a7858-8a71-4432-8c05-2bbbe7e8821b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.802867165Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3d9279a66ef5ba4ae3596d4bf3fb92de987a6e5d2eb6c74aa82ca7cd363f329,PodSandboxId:c499bf98989ccbb095beca531911e4a93230b5416b2b9877974699d543cc99d9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132607578901981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e,},Annotations:map[string]string{io.kubernetes.container.hash: 489ab746,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b1825f1d0c823d6402042029f120e8bf1bc20e1e5c148ea46b80e076d2ce506,PodSandboxId:c72b8f101411b171ec883d309a14daa4fb6afe576630c197058041ed5e01cbc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701132606650688747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88chq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 273e27bd-a4a8-4fa9-913a-a67ee5a80990,},Annotations:map[string]string{io.kubernetes.container.hash: 211ab8e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2177833c17712485813aed4f24f679d723a5475f4c1e4c3ee7a31460d51f7e2,PodSandboxId:ef320b5118d97e38314b7e1ac09ff023b1e7920f3a9131622891cc71b43bef32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701132606085911598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-fsfpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a466ce19-debe-424d-9eec-00513557472b,},Annotations:map[string]string{io.kubernetes.container.hash: a9b094fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60c716bf8e7dd58475aa5c4adaa00e92d364219f163e09118ce73e14c7c817e,PodSandboxId:ce0022ee5b25eb4f4c61fc2700352eec46f2699cf1b9055299e80f3ab938dc5d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701132605955777949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5s84s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4388650c-3956-44bf-86ea-6b64743166ca,},Annotat
ions:map[string]string{io.kubernetes.container.hash: a9b094fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e1c4fc0eff65b28c87e522b0d6bc9d46c17aa1d9752b63a9b673ac567a03cd,PodSandboxId:c586e2dd67a2c1fa5b172b80c95f0effa1b094c48cdc9dfc3053561b7b0518a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701132580435673670,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e287dac636cd18fa651d2219ad4ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 71b20b40,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1823a59d40c071836f4fd7cfa2e29752ad7e0c7464b7c9cda8a536c91acacd0d,PodSandboxId:a0326b82d62aaa1b9f93226cad826eeae2902d5435d34f6d1addd8e93b1f91ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701132579102946292,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ef985c9c8afad330ec7fb85589854860148ded23563c2ed76681bf82c48022,PodSandboxId:b81167430dd7c78b616968919912aa2b125b8c8dc621183c4014c5f349a6faeb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701132579004863537,Labels:map[string]string{io.kubernetes.container.
name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bcb5d5d5a7f5d15294b93974a89442f39c6d5b72f90c1bd8455ef232d30201,PodSandboxId:dd2bae0b694c320c2f59a1392dae169ea617fee2aa56f64e654f280bb65bce16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701132578706297173,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4b9e9d536b8786f0dbde3fec6faabba,},Annotations:map[string]string{io.kubernetes.container.hash: 3a6565be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a22a7858-8a71-4432-8c05-2bbbe7e8821b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.803690690Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c9c8763e-32ad-4323-84bc-a79e13d042b6 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.804487511Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ef148471e3870002824b796b37f4f030af3565f02a7076cd6ab0f0a5e1fb03e7,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-nd9qp,Uid:de534eb9-4a5c-400d-ba7c-da4bc1bef670,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132607917896538,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-nd9qp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de534eb9-4a5c-400d-ba7c-da4bc1bef670,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T00:50:07.566313946Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c499bf98989ccbb095beca531911e4a93230b5416b2b9877974699d543cc99d9,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9f880d43-3a6e-4eed-8f26-1a1ca9bdc6
0e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132607033839400,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\
"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-11-28T00:50:06.684722076Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ef320b5118d97e38314b7e1ac09ff023b1e7920f3a9131622891cc71b43bef32,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-fsfpw,Uid:a466ce19-debe-424d-9eec-00513557472b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132605104635324,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-fsfpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a466ce19-debe-424d-9eec-00513557472b,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T00:50:04.739557485Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce0022ee5b25eb4f4c61fc2700352eec46f2699cf1b9055299e80f3ab938dc5d,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-5s84s,Uid:4388650c-3956-
44bf-86ea-6b64743166ca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132604996764648,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-5s84s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4388650c-3956-44bf-86ea-6b64743166ca,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T00:50:04.635768318Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c72b8f101411b171ec883d309a14daa4fb6afe576630c197058041ed5e01cbc9,Metadata:&PodSandboxMetadata{Name:kube-proxy-88chq,Uid:273e27bd-a4a8-4fa9-913a-a67ee5a80990,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132604452608809,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-88chq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 273e27bd-a4a8-4fa9-913a-a67ee5a80990,k8s-app: kube-proxy,pod-
template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T00:50:04.105481854Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a0326b82d62aaa1b9f93226cad826eeae2902d5435d34f6d1addd8e93b1f91ad,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-732472,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132578316458283,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2023-11-28T00:49:37.793690103Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b81167430dd7c78b616968919912aa2b125b8c8dc621183c4014c5f349a6faeb,Metadata:&
PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-732472,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132578304542594,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2023-11-28T00:49:37.793691239Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c586e2dd67a2c1fa5b172b80c95f0effa1b094c48cdc9dfc3053561b7b0518a0,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-732472,Uid:f3e287dac636cd18fa651d2219ad4ea9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132578254103069,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8
s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e287dac636cd18fa651d2219ad4ea9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f3e287dac636cd18fa651d2219ad4ea9,kubernetes.io/config.seen: 2023-11-28T00:49:37.793681603Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dd2bae0b694c320c2f59a1392dae169ea617fee2aa56f64e654f280bb65bce16,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-732472,Uid:a4b9e9d536b8786f0dbde3fec6faabba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701132578249064316,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4b9e9d536b8786f0dbde3fec6faabba,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a4b9e9d536b8786f0dbde3fec6faabba,kubernetes.io/config.seen: 2023-11-28T00:49:37.793688315Z
,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=c9c8763e-32ad-4323-84bc-a79e13d042b6 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.807801997Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bc95b0f3-f1e9-4a7a-a36f-84ba2bb1b083 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.808049659Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bc95b0f3-f1e9-4a7a-a36f-84ba2bb1b083 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 28 01:03:21 old-k8s-version-732472 crio[710]: time="2023-11-28 01:03:21.809989245Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3d9279a66ef5ba4ae3596d4bf3fb92de987a6e5d2eb6c74aa82ca7cd363f329,PodSandboxId:c499bf98989ccbb095beca531911e4a93230b5416b2b9877974699d543cc99d9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701132607578901981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f880d43-3a6e-4eed-8f26-1a1ca9bdc60e,},Annotations:map[string]string{io.kubernetes.container.hash: 489ab746,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b1825f1d0c823d6402042029f120e8bf1bc20e1e5c148ea46b80e076d2ce506,PodSandboxId:c72b8f101411b171ec883d309a14daa4fb6afe576630c197058041ed5e01cbc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701132606650688747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-88chq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 273e27bd-a4a8-4fa9-913a-a67ee5a80990,},Annotations:map[string]string{io.kubernetes.container.hash: 211ab8e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2177833c17712485813aed4f24f679d723a5475f4c1e4c3ee7a31460d51f7e2,PodSandboxId:ef320b5118d97e38314b7e1ac09ff023b1e7920f3a9131622891cc71b43bef32,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701132606085911598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-fsfpw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a466ce19-debe-424d-9eec-00513557472b,},Annotations:map[string]string{io.kubernetes.container.hash: a9b094fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60c716bf8e7dd58475aa5c4adaa00e92d364219f163e09118ce73e14c7c817e,PodSandboxId:ce0022ee5b25eb4f4c61fc2700352eec46f2699cf1b9055299e80f3ab938dc5d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701132605955777949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-5s84s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4388650c-3956-44bf-86ea-6b64743166ca,},Annotat
ions:map[string]string{io.kubernetes.container.hash: a9b094fa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9e1c4fc0eff65b28c87e522b0d6bc9d46c17aa1d9752b63a9b673ac567a03cd,PodSandboxId:c586e2dd67a2c1fa5b172b80c95f0effa1b094c48cdc9dfc3053561b7b0518a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701132580435673670,Labels:map[string]string{io.kubernetes.cont
ainer.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3e287dac636cd18fa651d2219ad4ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 71b20b40,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1823a59d40c071836f4fd7cfa2e29752ad7e0c7464b7c9cda8a536c91acacd0d,PodSandboxId:a0326b82d62aaa1b9f93226cad826eeae2902d5435d34f6d1addd8e93b1f91ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701132579102946292,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8ef985c9c8afad330ec7fb85589854860148ded23563c2ed76681bf82c48022,PodSandboxId:b81167430dd7c78b616968919912aa2b125b8c8dc621183c4014c5f349a6faeb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701132579004863537,Labels:map[string]string{io.kubernetes.container.
name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bcb5d5d5a7f5d15294b93974a89442f39c6d5b72f90c1bd8455ef232d30201,PodSandboxId:dd2bae0b694c320c2f59a1392dae169ea617fee2aa56f64e654f280bb65bce16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701132578706297173,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-732472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4b9e9d536b8786f0dbde3fec6faabba,},Annotations:map[string]string{io.kubernetes.container.hash: 3a6565be,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bc95b0f3-f1e9-4a7a-a36f-84ba2bb1b083 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d3d9279a66ef5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   c499bf98989cc       storage-provisioner
	9b1825f1d0c82       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   13 minutes ago      Running             kube-proxy                0                   c72b8f101411b       kube-proxy-88chq
	a2177833c1771       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   13 minutes ago      Running             coredns                   0                   ef320b5118d97       coredns-5644d7b6d9-fsfpw
	b60c716bf8e7d       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   13 minutes ago      Running             coredns                   0                   ce0022ee5b25e       coredns-5644d7b6d9-5s84s
	b9e1c4fc0eff6       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   13 minutes ago      Running             etcd                      0                   c586e2dd67a2c       etcd-old-k8s-version-732472
	1823a59d40c07       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   13 minutes ago      Running             kube-controller-manager   0                   a0326b82d62aa       kube-controller-manager-old-k8s-version-732472
	a8ef985c9c8af       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   13 minutes ago      Running             kube-scheduler            0                   b81167430dd7c       kube-scheduler-old-k8s-version-732472
	f0bcb5d5d5a7f       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   13 minutes ago      Running             kube-apiserver            0                   dd2bae0b694c3       kube-apiserver-old-k8s-version-732472
	
	* 
	* ==> coredns [a2177833c17712485813aed4f24f679d723a5475f4c1e4c3ee7a31460d51f7e2] <==
	* .:53
	2023-11-28T00:50:06.431Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-11-28T00:50:06.431Z [INFO] CoreDNS-1.6.2
	2023-11-28T00:50:06.432Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-11-28T00:50:38.360Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	[INFO] Reloading complete
	
	* 
	* ==> coredns [b60c716bf8e7dd58475aa5c4adaa00e92d364219f163e09118ce73e14c7c817e] <==
	* .:53
	2023-11-28T00:50:06.394Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-11-28T00:50:06.394Z [INFO] CoreDNS-1.6.2
	2023-11-28T00:50:06.394Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-11-28T00:50:36.221Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-732472
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-732472
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=old-k8s-version-732472
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T00_49_49_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 00:49:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 01:02:45 +0000   Tue, 28 Nov 2023 00:49:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 01:02:45 +0000   Tue, 28 Nov 2023 00:49:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 01:02:45 +0000   Tue, 28 Nov 2023 00:49:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 01:02:45 +0000   Tue, 28 Nov 2023 00:49:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    old-k8s-version-732472
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 1581785acd1f4bd3a339cf98671c531d
	 System UUID:                1581785a-cd1f-4bd3-a339-cf98671c531d
	 Boot ID:                    4b090cb9-312f-4acd-958f-f6e962927841
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace    Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------    ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system  coredns-5644d7b6d9-5s84s                         100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system  coredns-5644d7b6d9-fsfpw                         100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system  etcd-old-k8s-version-732472                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system  kube-apiserver-old-k8s-version-732472            250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system  kube-controller-manager-old-k8s-version-732472   200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system  kube-proxy-88chq                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system  kube-scheduler-old-k8s-version-732472            100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system  metrics-server-74d5856cc6-nd9qp                  100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system  storage-provisioner                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             340Mi (16%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet, old-k8s-version-732472     Node old-k8s-version-732472 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet, old-k8s-version-732472     Node old-k8s-version-732472 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet, old-k8s-version-732472     Node old-k8s-version-732472 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kube-proxy, old-k8s-version-732472  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Nov28 00:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.087466] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.583383] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.472544] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147326] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.574552] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.935361] systemd-fstab-generator[634]: Ignoring "noauto" for root device
	[  +0.165813] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.172772] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.132870] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.234892] systemd-fstab-generator[693]: Ignoring "noauto" for root device
	[ +19.783735] systemd-fstab-generator[1023]: Ignoring "noauto" for root device
	[  +0.438464] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +15.991683] kauditd_printk_skb: 3 callbacks suppressed
	[Nov28 00:45] kauditd_printk_skb: 2 callbacks suppressed
	[Nov28 00:49] systemd-fstab-generator[3136]: Ignoring "noauto" for root device
	[  +1.432963] kauditd_printk_skb: 6 callbacks suppressed
	[Nov28 00:50] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [b9e1c4fc0eff65b28c87e522b0d6bc9d46c17aa1d9752b63a9b673ac567a03cd] <==
	* 2023-11-28 00:49:40.584582 W | auth: simple token is not cryptographically signed
	2023-11-28 00:49:40.589597 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-11-28 00:49:40.590813 I | etcdserver: bbf1bb039b0d3451 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-11-28 00:49:40.591130 I | etcdserver/membership: added member bbf1bb039b0d3451 [https://192.168.39.172:2380] to cluster a5f5c7bb54d744d4
	2023-11-28 00:49:40.592893 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-28 00:49:40.593298 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-28 00:49:40.593451 I | embed: listening for metrics on http://192.168.39.172:2381
	2023-11-28 00:49:41.376444 I | raft: bbf1bb039b0d3451 is starting a new election at term 1
	2023-11-28 00:49:41.376573 I | raft: bbf1bb039b0d3451 became candidate at term 2
	2023-11-28 00:49:41.376587 I | raft: bbf1bb039b0d3451 received MsgVoteResp from bbf1bb039b0d3451 at term 2
	2023-11-28 00:49:41.376597 I | raft: bbf1bb039b0d3451 became leader at term 2
	2023-11-28 00:49:41.376602 I | raft: raft.node: bbf1bb039b0d3451 elected leader bbf1bb039b0d3451 at term 2
	2023-11-28 00:49:41.377075 I | etcdserver: published {Name:old-k8s-version-732472 ClientURLs:[https://192.168.39.172:2379]} to cluster a5f5c7bb54d744d4
	2023-11-28 00:49:41.377136 I | embed: ready to serve client requests
	2023-11-28 00:49:41.378042 I | etcdserver: setting up the initial cluster version to 3.3
	2023-11-28 00:49:41.378756 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-28 00:49:41.378837 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-11-28 00:49:41.379301 I | etcdserver/api: enabled capabilities for version 3.3
	2023-11-28 00:49:41.389781 I | embed: ready to serve client requests
	2023-11-28 00:49:41.394100 I | embed: serving client requests on 192.168.39.172:2379
	2023-11-28 00:50:06.205833 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-732472\" " with result "range_response_count:1 size:4370" took too long (204.132171ms) to execute
	2023-11-28 00:50:06.206240 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" " with result "range_response_count:0 size:5" took too long (141.002534ms) to execute
	2023-11-28 00:50:06.222049 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/coredns\" " with result "range_response_count:1 size:466" took too long (162.26639ms) to execute
	2023-11-28 00:59:41.412308 I | mvcc: store.index: compact 661
	2023-11-28 00:59:41.414935 I | mvcc: finished scheduled compaction at 661 (took 2.139467ms)
	
	* 
	* ==> kernel <==
	*  01:03:22 up 19 min,  0 users,  load average: 0.73, 0.33, 0.25
	Linux old-k8s-version-732472 5.10.57 #1 SMP Mon Nov 27 21:58:27 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [f0bcb5d5d5a7f5d15294b93974a89442f39c6d5b72f90c1bd8455ef232d30201] <==
	* I1128 00:55:45.606245       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1128 00:55:45.606418       1 handler_proxy.go:99] no RequestInfo found in the context
	E1128 00:55:45.606463       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 00:55:45.606474       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 00:57:45.606845       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1128 00:57:45.606955       1 handler_proxy.go:99] no RequestInfo found in the context
	E1128 00:57:45.607036       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 00:57:45.607048       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 00:59:45.608936       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1128 00:59:45.609052       1 handler_proxy.go:99] no RequestInfo found in the context
	E1128 00:59:45.609119       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 00:59:45.609126       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 01:00:45.609762       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1128 01:00:45.609865       1 handler_proxy.go:99] no RequestInfo found in the context
	E1128 01:00:45.609917       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 01:00:45.609924       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 01:02:45.616490       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1128 01:02:45.616619       1 handler_proxy.go:99] no RequestInfo found in the context
	E1128 01:02:45.616685       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 01:02:45.616692       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [1823a59d40c071836f4fd7cfa2e29752ad7e0c7464b7c9cda8a536c91acacd0d] <==
	* W1128 00:57:00.667291       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 00:57:08.001542       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 00:57:32.669468       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 00:57:38.253932       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 00:58:04.671619       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 00:58:08.506016       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 00:58:36.673763       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 00:58:38.757903       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 00:59:08.676094       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 00:59:09.009926       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1128 00:59:39.261812       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 00:59:40.677985       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 01:00:09.513833       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 01:00:12.680326       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 01:00:39.766018       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 01:00:44.682691       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 01:01:10.018837       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 01:01:16.685179       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 01:01:40.271215       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 01:01:48.687258       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 01:02:10.523077       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 01:02:20.689517       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 01:02:40.775111       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 01:02:52.691589       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 01:03:11.027313       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [9b1825f1d0c823d6402042029f120e8bf1bc20e1e5c148ea46b80e076d2ce506] <==
	* W1128 00:50:07.273928       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1128 00:50:07.290292       1 node.go:135] Successfully retrieved node IP: 192.168.39.172
	I1128 00:50:07.290328       1 server_others.go:149] Using iptables Proxier.
	I1128 00:50:07.292144       1 server.go:529] Version: v1.16.0
	I1128 00:50:07.299044       1 config.go:131] Starting endpoints config controller
	I1128 00:50:07.304229       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1128 00:50:07.307611       1 config.go:313] Starting service config controller
	I1128 00:50:07.307652       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1128 00:50:07.404590       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1128 00:50:07.409694       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [a8ef985c9c8afad330ec7fb85589854860148ded23563c2ed76681bf82c48022] <==
	* W1128 00:49:44.638960       1 authentication.go:79] Authentication is disabled
	I1128 00:49:44.639002       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1128 00:49:44.639443       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1128 00:49:44.686124       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1128 00:49:44.694132       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 00:49:44.698758       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1128 00:49:44.698859       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1128 00:49:44.698953       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 00:49:44.701638       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 00:49:44.701729       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 00:49:44.701818       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1128 00:49:44.701893       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1128 00:49:44.701969       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 00:49:44.702660       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1128 00:49:45.693266       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1128 00:49:45.695090       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 00:49:45.700609       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1128 00:49:45.704851       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1128 00:49:45.705881       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 00:49:45.707337       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 00:49:45.709837       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 00:49:45.711477       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1128 00:49:45.714162       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1128 00:49:45.715215       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 00:49:45.716216       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 00:44:09 UTC, ends at Tue 2023-11-28 01:03:22 UTC. --
	Nov 28 00:58:51 old-k8s-version-732472 kubelet[3155]: E1128 00:58:51.800135    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:59:05 old-k8s-version-732472 kubelet[3155]: E1128 00:59:05.800519    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:59:18 old-k8s-version-732472 kubelet[3155]: E1128 00:59:18.800962    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:59:32 old-k8s-version-732472 kubelet[3155]: E1128 00:59:32.800244    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 00:59:37 old-k8s-version-732472 kubelet[3155]: E1128 00:59:37.872047    3155 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Nov 28 00:59:46 old-k8s-version-732472 kubelet[3155]: E1128 00:59:46.800680    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 01:00:00 old-k8s-version-732472 kubelet[3155]: E1128 01:00:00.805038    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 01:00:14 old-k8s-version-732472 kubelet[3155]: E1128 01:00:14.800095    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 01:00:29 old-k8s-version-732472 kubelet[3155]: E1128 01:00:29.800292    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 01:00:40 old-k8s-version-732472 kubelet[3155]: E1128 01:00:40.799929    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 01:00:54 old-k8s-version-732472 kubelet[3155]: E1128 01:00:54.872291    3155 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 28 01:00:54 old-k8s-version-732472 kubelet[3155]: E1128 01:00:54.872461    3155 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 28 01:00:54 old-k8s-version-732472 kubelet[3155]: E1128 01:00:54.872527    3155 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 28 01:00:54 old-k8s-version-732472 kubelet[3155]: E1128 01:00:54.872569    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Nov 28 01:01:07 old-k8s-version-732472 kubelet[3155]: E1128 01:01:07.802838    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 01:01:20 old-k8s-version-732472 kubelet[3155]: E1128 01:01:20.800303    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 01:01:32 old-k8s-version-732472 kubelet[3155]: E1128 01:01:32.800144    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 01:01:45 old-k8s-version-732472 kubelet[3155]: E1128 01:01:45.800305    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 01:01:58 old-k8s-version-732472 kubelet[3155]: E1128 01:01:58.800456    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 01:02:11 old-k8s-version-732472 kubelet[3155]: E1128 01:02:11.800150    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 01:02:26 old-k8s-version-732472 kubelet[3155]: E1128 01:02:26.800272    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 01:02:39 old-k8s-version-732472 kubelet[3155]: E1128 01:02:39.800461    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 01:02:53 old-k8s-version-732472 kubelet[3155]: E1128 01:02:53.804818    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 01:03:08 old-k8s-version-732472 kubelet[3155]: E1128 01:03:08.800724    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 01:03:20 old-k8s-version-732472 kubelet[3155]: E1128 01:03:20.800940    3155 pod_workers.go:191] Error syncing pod de534eb9-4a5c-400d-ba7c-da4bc1bef670 ("metrics-server-74d5856cc6-nd9qp_kube-system(de534eb9-4a5c-400d-ba7c-da4bc1bef670)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [d3d9279a66ef5ba4ae3596d4bf3fb92de987a6e5d2eb6c74aa82ca7cd363f329] <==
	* I1128 00:50:07.720180       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 00:50:07.732723       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 00:50:07.732787       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1128 00:50:07.742963       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1128 00:50:07.743181       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-732472_21f89187-c378-43ba-acbe-0c31444d4fd8!
	I1128 00:50:07.744616       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd0a0a8e-b3f1-4694-90e5-0d6d2344bc64", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-732472_21f89187-c378-43ba-acbe-0c31444d4fd8 became leader
	I1128 00:50:07.843538       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-732472_21f89187-c378-43ba-acbe-0c31444d4fd8!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-732472 -n old-k8s-version-732472
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-732472 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-nd9qp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-732472 describe pod metrics-server-74d5856cc6-nd9qp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-732472 describe pod metrics-server-74d5856cc6-nd9qp: exit status 1 (66.8949ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-nd9qp" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-732472 describe pod metrics-server-74d5856cc6-nd9qp: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (182.88s)

TestStartStop/group/newest-cni/serial/Stop (140.45s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-517109 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p newest-cni-517109 --alsologtostderr -v=3: exit status 82 (2m1.727991056s)

-- stdout --
	* Stopping node "newest-cni-517109"  ...
	* Stopping node "newest-cni-517109"  ...
	
	

-- /stdout --
** stderr ** 
	I1128 01:04:30.485193   51596 out.go:296] Setting OutFile to fd 1 ...
	I1128 01:04:30.485489   51596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 01:04:30.485501   51596 out.go:309] Setting ErrFile to fd 2...
	I1128 01:04:30.485508   51596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 01:04:30.485704   51596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1128 01:04:30.485963   51596 out.go:303] Setting JSON to false
	I1128 01:04:30.486066   51596 mustload.go:65] Loading cluster: newest-cni-517109
	I1128 01:04:30.486428   51596 config.go:182] Loaded profile config "newest-cni-517109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 01:04:30.486513   51596 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/newest-cni-517109/config.json ...
	I1128 01:04:30.486693   51596 mustload.go:65] Loading cluster: newest-cni-517109
	I1128 01:04:30.486834   51596 config.go:182] Loaded profile config "newest-cni-517109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 01:04:30.486878   51596 stop.go:39] StopHost: newest-cni-517109
	I1128 01:04:30.487282   51596 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 01:04:30.487340   51596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 01:04:30.502958   51596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42959
	I1128 01:04:30.503520   51596 main.go:141] libmachine: () Calling .GetVersion
	I1128 01:04:30.504206   51596 main.go:141] libmachine: Using API Version  1
	I1128 01:04:30.504238   51596 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 01:04:30.504704   51596 main.go:141] libmachine: () Calling .GetMachineName
	I1128 01:04:30.506789   51596 out.go:177] * Stopping node "newest-cni-517109"  ...
	I1128 01:04:30.508792   51596 main.go:141] libmachine: Stopping "newest-cni-517109"...
	I1128 01:04:30.508810   51596 main.go:141] libmachine: (newest-cni-517109) Calling .GetState
	I1128 01:04:30.510437   51596 main.go:141] libmachine: (newest-cni-517109) Calling .Stop
	I1128 01:04:30.513629   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 0/60
	I1128 01:04:31.515764   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 1/60
	I1128 01:04:32.517757   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 2/60
	I1128 01:04:33.519465   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 3/60
	I1128 01:04:34.520780   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 4/60
	I1128 01:04:35.522891   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 5/60
	I1128 01:04:36.525095   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 6/60
	I1128 01:04:37.527530   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 7/60
	I1128 01:04:38.529131   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 8/60
	I1128 01:04:39.531119   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 9/60
	I1128 01:04:40.533257   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 10/60
	I1128 01:04:41.535133   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 11/60
	I1128 01:04:42.536545   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 12/60
	I1128 01:04:43.537689   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 13/60
	I1128 01:04:44.538930   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 14/60
	I1128 01:04:45.540921   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 15/60
	I1128 01:04:46.542290   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 16/60
	I1128 01:04:47.544215   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 17/60
	I1128 01:04:48.545477   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 18/60
	I1128 01:04:49.546802   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 19/60
	I1128 01:04:50.548694   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 20/60
	I1128 01:04:51.549813   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 21/60
	I1128 01:04:52.551183   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 22/60
	I1128 01:04:53.553730   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 23/60
	I1128 01:04:54.555606   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 24/60
	I1128 01:04:55.557526   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 25/60
	I1128 01:04:56.558807   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 26/60
	I1128 01:04:57.560613   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 27/60
	I1128 01:04:58.561860   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 28/60
	I1128 01:04:59.563169   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 29/60
	I1128 01:05:00.564774   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 30/60
	I1128 01:05:01.566382   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 31/60
	I1128 01:05:02.567786   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 32/60
	I1128 01:05:03.569747   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 33/60
	I1128 01:05:04.571272   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 34/60
	I1128 01:05:05.572944   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 35/60
	I1128 01:05:06.575267   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 36/60
	I1128 01:05:07.576745   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 37/60
	I1128 01:05:08.578123   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 38/60
	I1128 01:05:09.579666   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 39/60
	I1128 01:05:10.582114   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 40/60
	I1128 01:05:11.584074   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 41/60
	I1128 01:05:12.585323   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 42/60
	I1128 01:05:13.587339   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 43/60
	I1128 01:05:14.588696   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 44/60
	I1128 01:05:15.590004   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 45/60
	I1128 01:05:16.591573   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 46/60
	I1128 01:05:17.593216   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 47/60
	I1128 01:05:18.595357   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 48/60
	I1128 01:05:19.597250   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 49/60
	I1128 01:05:20.598780   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 50/60
	I1128 01:05:21.600473   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 51/60
	I1128 01:05:22.601987   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 52/60
	I1128 01:05:23.603512   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 53/60
	I1128 01:05:24.606336   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 54/60
	I1128 01:05:25.608132   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 55/60
	I1128 01:05:26.609489   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 56/60
	I1128 01:05:27.611412   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 57/60
	I1128 01:05:28.612722   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 58/60
	I1128 01:05:29.614275   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 59/60
	I1128 01:05:30.615544   51596 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1128 01:05:30.615599   51596 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 01:05:30.615618   51596 retry.go:31] will retry after 1.138759635s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 01:05:31.754471   51596 stop.go:39] StopHost: newest-cni-517109
	I1128 01:05:31.754987   51596 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17206-4749/.minikube/bin/docker-machine-driver-kvm2
	I1128 01:05:31.755051   51596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 01:05:31.770458   51596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37699
	I1128 01:05:31.770987   51596 main.go:141] libmachine: () Calling .GetVersion
	I1128 01:05:31.771470   51596 main.go:141] libmachine: Using API Version  1
	I1128 01:05:31.771493   51596 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 01:05:31.771901   51596 main.go:141] libmachine: () Calling .GetMachineName
	I1128 01:05:31.774147   51596 out.go:177] * Stopping node "newest-cni-517109"  ...
	I1128 01:05:31.775595   51596 main.go:141] libmachine: Stopping "newest-cni-517109"...
	I1128 01:05:31.775614   51596 main.go:141] libmachine: (newest-cni-517109) Calling .GetState
	I1128 01:05:31.777542   51596 main.go:141] libmachine: (newest-cni-517109) Calling .Stop
	I1128 01:05:31.781367   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 0/60
	I1128 01:05:32.783307   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 1/60
	I1128 01:05:33.785649   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 2/60
	I1128 01:05:34.788166   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 3/60
	I1128 01:05:35.789983   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 4/60
	I1128 01:05:36.791754   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 5/60
	I1128 01:05:37.793321   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 6/60
	I1128 01:05:38.795559   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 7/60
	I1128 01:05:39.797516   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 8/60
	I1128 01:05:40.798818   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 9/60
	I1128 01:05:41.800526   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 10/60
	I1128 01:05:42.802172   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 11/60
	I1128 01:05:43.803534   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 12/60
	I1128 01:05:44.804774   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 13/60
	I1128 01:05:45.806453   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 14/60
	I1128 01:05:46.808838   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 15/60
	I1128 01:05:47.810052   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 16/60
	I1128 01:05:48.812445   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 17/60
	I1128 01:05:49.814483   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 18/60
	I1128 01:05:50.815413   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 19/60
	I1128 01:05:51.817162   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 20/60
	I1128 01:05:52.819723   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 21/60
	I1128 01:05:53.822554   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 22/60
	I1128 01:05:54.824330   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 23/60
	I1128 01:05:55.826642   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 24/60
	I1128 01:05:56.828667   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 25/60
	I1128 01:05:57.829978   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 26/60
	I1128 01:05:58.831263   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 27/60
	I1128 01:05:59.832375   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 28/60
	I1128 01:06:00.833992   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 29/60
	I1128 01:06:01.836040   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 30/60
	I1128 01:06:02.837480   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 31/60
	I1128 01:06:03.839279   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 32/60
	I1128 01:06:05.085095   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 33/60
	I1128 01:06:06.086642   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 34/60
	I1128 01:06:07.089042   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 35/60
	I1128 01:06:08.091520   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 36/60
	I1128 01:06:09.093095   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 37/60
	I1128 01:06:10.095283   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 38/60
	I1128 01:06:11.096890   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 39/60
	I1128 01:06:12.098552   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 40/60
	I1128 01:06:13.100118   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 41/60
	I1128 01:06:14.102364   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 42/60
	I1128 01:06:15.103658   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 43/60
	I1128 01:06:16.105281   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 44/60
	I1128 01:06:17.106684   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 45/60
	I1128 01:06:18.108371   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 46/60
	I1128 01:06:19.109891   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 47/60
	I1128 01:06:20.111446   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 48/60
	I1128 01:06:21.112926   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 49/60
	I1128 01:06:22.115369   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 50/60
	I1128 01:06:23.117252   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 51/60
	I1128 01:06:24.119356   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 52/60
	I1128 01:06:25.120903   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 53/60
	I1128 01:06:26.123375   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 54/60
	I1128 01:06:27.125087   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 55/60
	I1128 01:06:28.126410   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 56/60
	I1128 01:06:29.127754   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 57/60
	I1128 01:06:30.129354   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 58/60
	I1128 01:06:31.131460   51596 main.go:141] libmachine: (newest-cni-517109) Waiting for machine to stop 59/60
	I1128 01:06:32.132629   51596 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1128 01:06:32.132679   51596 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 01:06:32.135055   51596 out.go:177] 
	W1128 01:06:32.136609   51596 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1128 01:06:32.136629   51596 out.go:239] * 
	* 
	W1128 01:06:32.139999   51596 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 01:06:32.141724   51596 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p newest-cni-517109 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-517109 -n newest-cni-517109
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-517109 -n newest-cni-517109: exit status 3 (18.714919103s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1128 01:06:50.861161   53860 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	E1128 01:06:50.861181   53860 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "newest-cni-517109" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (140.45s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (12.41s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-517109 -n newest-cni-517109
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-517109 -n newest-cni-517109: exit status 3 (3.197534123s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1128 01:06:54.057222   53937 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	E1128 01:06:54.057247   53937 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-517109 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p newest-cni-517109 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155368956s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p newest-cni-517109 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-517109 -n newest-cni-517109
E1128 01:07:00.581164   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-517109 -n newest-cni-517109: exit status 3 (3.060244052s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1128 01:07:03.273062   54338 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	E1128 01:07:03.273082   54338 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "newest-cni-517109" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (12.41s)


Test pass (236/303)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 51.75
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.4/json-events 16.32
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.07
17 TestDownloadOnly/v1.29.0-rc.0/json-events 43.58
18 TestDownloadOnly/v1.29.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.0/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.14
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
26 TestBinaryMirror 0.57
27 TestOffline 104.79
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
32 TestAddons/Setup 216.3
34 TestAddons/parallel/Registry 25.49
36 TestAddons/parallel/InspektorGadget 11.55
37 TestAddons/parallel/MetricsServer 6.26
38 TestAddons/parallel/HelmTiller 12.52
40 TestAddons/parallel/CSI 46.38
41 TestAddons/parallel/Headlamp 17.52
42 TestAddons/parallel/CloudSpanner 5.75
43 TestAddons/parallel/LocalPath 18.49
44 TestAddons/parallel/NvidiaDevicePlugin 5.9
47 TestAddons/serial/GCPAuth/Namespaces 0.11
49 TestCertOptions 56.11
50 TestCertExpiration 286.08
52 TestForceSystemdFlag 108.15
53 TestForceSystemdEnv 76.23
55 TestKVMDriverInstallOrUpdate 5.49
59 TestErrorSpam/setup 46.24
60 TestErrorSpam/start 0.37
61 TestErrorSpam/status 0.79
62 TestErrorSpam/pause 1.59
63 TestErrorSpam/unpause 1.72
64 TestErrorSpam/stop 2.26
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 63.3
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 38.09
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.07
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.09
76 TestFunctional/serial/CacheCmd/cache/add_local 2.22
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 289.79
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.14
87 TestFunctional/serial/LogsFileCmd 1.14
88 TestFunctional/serial/InvalidService 4.09
90 TestFunctional/parallel/ConfigCmd 0.43
91 TestFunctional/parallel/DashboardCmd 31.44
92 TestFunctional/parallel/DryRun 0.29
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 1.06
98 TestFunctional/parallel/ServiceCmdConnect 12.72
99 TestFunctional/parallel/AddonsCmd 0.16
100 TestFunctional/parallel/PersistentVolumeClaim 46.5
102 TestFunctional/parallel/SSHCmd 0.5
103 TestFunctional/parallel/CpCmd 1.04
104 TestFunctional/parallel/MySQL 28.23
105 TestFunctional/parallel/FileSync 0.24
106 TestFunctional/parallel/CertSync 1.77
110 TestFunctional/parallel/NodeLabels 0.09
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
114 TestFunctional/parallel/License 0.61
115 TestFunctional/parallel/ServiceCmd/DeployApp 12.24
125 TestFunctional/parallel/Version/short 0.06
126 TestFunctional/parallel/Version/components 0.75
127 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
128 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
129 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
130 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
131 TestFunctional/parallel/ImageCommands/ImageBuild 7.17
132 TestFunctional/parallel/ImageCommands/Setup 2.21
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.83
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.86
136 TestFunctional/parallel/ServiceCmd/List 0.44
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
139 TestFunctional/parallel/ServiceCmd/Format 0.45
140 TestFunctional/parallel/ServiceCmd/URL 0.44
141 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
142 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
143 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
145 TestFunctional/parallel/ProfileCmd/profile_list 0.32
146 TestFunctional/parallel/MountCmd/any-port 27.01
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.39
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.39
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.93
152 TestFunctional/parallel/MountCmd/specific-port 2.06
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.5
154 TestFunctional/delete_addon-resizer_images 0.06
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestIngressAddonLegacy/StartLegacyK8sCluster 110.84
162 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 16.94
163 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.57
167 TestJSONOutput/start/Command 61.55
168 TestJSONOutput/start/Audit 0
170 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/pause/Command 0.68
174 TestJSONOutput/pause/Audit 0
176 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/unpause/Command 0.63
180 TestJSONOutput/unpause/Audit 0
182 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/stop/Command 7.12
186 TestJSONOutput/stop/Audit 0
188 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
190 TestErrorJSONOutput 0.22
195 TestMainNoArgs 0.06
196 TestMinikubeProfile 93.52
199 TestMountStart/serial/StartWithMountFirst 28.96
200 TestMountStart/serial/VerifyMountFirst 0.4
201 TestMountStart/serial/StartWithMountSecond 28.5
202 TestMountStart/serial/VerifyMountSecond 0.4
203 TestMountStart/serial/DeleteFirst 0.68
204 TestMountStart/serial/VerifyMountPostDelete 0.4
205 TestMountStart/serial/Stop 1.18
206 TestMountStart/serial/RestartStopped 24.19
207 TestMountStart/serial/VerifyMountPostStop 0.41
210 TestMultiNode/serial/FreshStart2Nodes 109.71
211 TestMultiNode/serial/DeployApp2Nodes 5.38
213 TestMultiNode/serial/AddNode 45.73
214 TestMultiNode/serial/ProfileList 0.23
215 TestMultiNode/serial/CopyFile 7.72
216 TestMultiNode/serial/StopNode 2.27
217 TestMultiNode/serial/StartAfterStop 30.69
219 TestMultiNode/serial/DeleteNode 1.78
221 TestMultiNode/serial/RestartMultiNode 445.4
222 TestMultiNode/serial/ValidateNameConflict 48.99
229 TestScheduledStopUnix 116.79
235 TestKubernetesUpgrade 192.53
238 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
239 TestNoKubernetes/serial/StartWithK8s 105.43
247 TestNoKubernetes/serial/StartWithStopK8s 63.41
248 TestNoKubernetes/serial/Start 28.88
256 TestNetworkPlugins/group/false 4.49
261 TestPause/serial/Start 112.92
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
263 TestNoKubernetes/serial/ProfileList 0.41
264 TestNoKubernetes/serial/Stop 1.17
266 TestPause/serial/SecondStartNoReconfiguration 42.4
267 TestStoppedBinaryUpgrade/Setup 2.32
269 TestPause/serial/Pause 1.5
270 TestPause/serial/VerifyStatus 0.29
271 TestPause/serial/Unpause 0.78
272 TestPause/serial/PauseAgain 1.4
273 TestPause/serial/DeletePaused 1.05
274 TestPause/serial/VerifyDeletedResources 5.87
276 TestStartStop/group/old-k8s-version/serial/FirstStart 132.37
278 TestStartStop/group/no-preload/serial/FirstStart 208.57
280 TestStartStop/group/embed-certs/serial/FirstStart 62.37
281 TestStartStop/group/old-k8s-version/serial/DeployApp 11.51
282 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.84
284 TestStartStop/group/embed-certs/serial/DeployApp 10.45
285 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
287 TestStoppedBinaryUpgrade/MinikubeLogs 0.4
288 TestStartStop/group/no-preload/serial/DeployApp 10.93
290 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 61.19
291 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
293 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.39
295 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
297 TestStartStop/group/old-k8s-version/serial/SecondStart 779.96
299 TestStartStop/group/embed-certs/serial/SecondStart 570.67
301 TestStartStop/group/no-preload/serial/SecondStart 576.98
303 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 489.09
313 TestStartStop/group/newest-cni/serial/FirstStart 64.66
314 TestNetworkPlugins/group/auto/Start 126.2
315 TestStartStop/group/newest-cni/serial/DeployApp 0
316 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.24
318 TestNetworkPlugins/group/calico/Start 95.25
319 TestNetworkPlugins/group/auto/KubeletFlags 0.24
320 TestNetworkPlugins/group/auto/NetCatPod 11.44
321 TestNetworkPlugins/group/auto/DNS 0.18
322 TestNetworkPlugins/group/auto/Localhost 0.14
323 TestNetworkPlugins/group/auto/HairPin 0.14
324 TestNetworkPlugins/group/custom-flannel/Start 90.89
326 TestNetworkPlugins/group/kindnet/Start 71.3
327 TestNetworkPlugins/group/calico/ControllerPod 5.03
328 TestStartStop/group/newest-cni/serial/SecondStart 431.37
329 TestNetworkPlugins/group/calico/KubeletFlags 0.24
330 TestNetworkPlugins/group/calico/NetCatPod 13.48
331 TestNetworkPlugins/group/calico/DNS 0.18
332 TestNetworkPlugins/group/calico/Localhost 0.15
333 TestNetworkPlugins/group/calico/HairPin 0.17
334 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
335 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.43
336 TestNetworkPlugins/group/flannel/Start 351.79
337 TestNetworkPlugins/group/custom-flannel/DNS 0.19
338 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
339 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
340 TestNetworkPlugins/group/enable-default-cni/Start 327.04
341 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
342 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
343 TestNetworkPlugins/group/kindnet/NetCatPod 12.35
344 TestNetworkPlugins/group/kindnet/DNS 0.18
345 TestNetworkPlugins/group/kindnet/Localhost 0.16
346 TestNetworkPlugins/group/kindnet/HairPin 0.14
347 TestNetworkPlugins/group/bridge/Start 357.44
348 TestNetworkPlugins/group/flannel/ControllerPod 5.06
349 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
350 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.45
351 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
352 TestNetworkPlugins/group/flannel/NetCatPod 12.57
353 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
354 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
355 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
356 TestNetworkPlugins/group/flannel/DNS 0.2
357 TestNetworkPlugins/group/flannel/Localhost 0.17
358 TestNetworkPlugins/group/flannel/HairPin 0.2
359 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
360 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
361 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
362 TestStartStop/group/newest-cni/serial/Pause 2.57
363 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
364 TestNetworkPlugins/group/bridge/NetCatPod 10.36
365 TestNetworkPlugins/group/bridge/DNS 0.17
366 TestNetworkPlugins/group/bridge/Localhost 0.14
367 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.16.0/json-events (51.75s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-480485 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-480485 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (51.752085531s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (51.75s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-480485
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-480485: exit status 85 (77.322516ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-480485 | jenkins | v1.32.0 | 27 Nov 23 23:24 UTC |          |
	|         | -p download-only-480485        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:24:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:24:58.538246   11942 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:24:58.538385   11942 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:24:58.538395   11942 out.go:309] Setting ErrFile to fd 2...
	I1127 23:24:58.538400   11942 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:24:58.538606   11942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	W1127 23:24:58.538754   11942 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17206-4749/.minikube/config/config.json: open /home/jenkins/minikube-integration/17206-4749/.minikube/config/config.json: no such file or directory
	I1127 23:24:58.539377   11942 out.go:303] Setting JSON to true
	I1127 23:24:58.540195   11942 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":446,"bootTime":1701127053,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 23:24:58.540253   11942 start.go:138] virtualization: kvm guest
	I1127 23:24:58.542689   11942 out.go:97] [download-only-480485] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 23:24:58.544288   11942 out.go:169] MINIKUBE_LOCATION=17206
	W1127 23:24:58.542819   11942 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball: no such file or directory
	I1127 23:24:58.542897   11942 notify.go:220] Checking for updates...
	I1127 23:24:58.547151   11942 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:24:58.548600   11942 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1127 23:24:58.549976   11942 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1127 23:24:58.551349   11942 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1127 23:24:58.553809   11942 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1127 23:24:58.554023   11942 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:24:58.729616   11942 out.go:97] Using the kvm2 driver based on user configuration
	I1127 23:24:58.729659   11942 start.go:298] selected driver: kvm2
	I1127 23:24:58.729665   11942 start.go:902] validating driver "kvm2" against <nil>
	I1127 23:24:58.730032   11942 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:24:58.730168   11942 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17206-4749/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1127 23:24:58.743714   11942 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1127 23:24:58.743799   11942 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 23:24:58.744289   11942 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1127 23:24:58.744448   11942 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1127 23:24:58.744505   11942 cni.go:84] Creating CNI manager for ""
	I1127 23:24:58.744520   11942 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1127 23:24:58.744532   11942 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1127 23:24:58.744538   11942 start_flags.go:323] config:
	{Name:download-only-480485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-480485 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:24:58.744737   11942 iso.go:125] acquiring lock: {Name:mkcbf4fbddcb89ef7fa17df683cb708781ecb7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:24:58.746766   11942 out.go:97] Downloading VM boot image ...
	I1127 23:24:58.746807   11942 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/iso/amd64/minikube-v1.32.1-1701107474-17206-amd64.iso
	I1127 23:25:08.130265   11942 out.go:97] Starting control plane node download-only-480485 in cluster download-only-480485
	I1127 23:25:08.130299   11942 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1127 23:25:08.239985   11942 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1127 23:25:08.240018   11942 cache.go:56] Caching tarball of preloaded images
	I1127 23:25:08.240162   11942 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1127 23:25:08.242042   11942 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1127 23:25:08.242055   11942 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:25:08.359595   11942 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1127 23:25:22.812237   11942 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:25:22.812325   11942 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:25:23.710113   11942 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1127 23:25:23.710468   11942 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/download-only-480485/config.json ...
	I1127 23:25:23.710502   11942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/download-only-480485/config.json: {Name:mk58401be162344dd504587796081ebba45fd3f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:25:23.710664   11942 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1127 23:25:23.710862   11942 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-480485"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
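The log above shows the three artifacts a --download-only run is expected to leave in the cache: the VM boot ISO, the checksum-verified preload tarball, and the per-version kubectl binary. A minimal sketch for inspecting that cache by hand, assuming the same MINIKUBE_HOME this job uses (not part of the test suite):

	# paths taken from the download.go lines above
	ls /home/jenkins/minikube-integration/17206-4749/.minikube/cache/iso/amd64/
	ls /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/
	ls /home/jenkins/minikube-integration/17206-4749/.minikube/cache/linux/amd64/v1.16.0/kubectl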

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (16.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-480485 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-480485 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (16.317899558s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (16.32s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-480485
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-480485: exit status 85 (71.191655ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-480485 | jenkins | v1.32.0 | 27 Nov 23 23:24 UTC |          |
	|         | -p download-only-480485        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-480485 | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC |          |
	|         | -p download-only-480485        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:25:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:25:50.364995   12089 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:25:50.365267   12089 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:25:50.365278   12089 out.go:309] Setting ErrFile to fd 2...
	I1127 23:25:50.365285   12089 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:25:50.365474   12089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	W1127 23:25:50.365602   12089 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17206-4749/.minikube/config/config.json: open /home/jenkins/minikube-integration/17206-4749/.minikube/config/config.json: no such file or directory
	I1127 23:25:50.366045   12089 out.go:303] Setting JSON to true
	I1127 23:25:50.366865   12089 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":497,"bootTime":1701127053,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 23:25:50.366927   12089 start.go:138] virtualization: kvm guest
	I1127 23:25:50.369047   12089 out.go:97] [download-only-480485] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 23:25:50.370637   12089 out.go:169] MINIKUBE_LOCATION=17206
	I1127 23:25:50.369254   12089 notify.go:220] Checking for updates...
	I1127 23:25:50.373595   12089 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:25:50.375201   12089 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1127 23:25:50.376654   12089 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1127 23:25:50.378005   12089 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1127 23:25:50.380460   12089 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1127 23:25:50.380925   12089 config.go:182] Loaded profile config "download-only-480485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1127 23:25:50.380983   12089 start.go:810] api.Load failed for download-only-480485: filestore "download-only-480485": Docker machine "download-only-480485" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1127 23:25:50.381063   12089 driver.go:378] Setting default libvirt URI to qemu:///system
	W1127 23:25:50.381093   12089 start.go:810] api.Load failed for download-only-480485: filestore "download-only-480485": Docker machine "download-only-480485" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1127 23:25:50.412948   12089 out.go:97] Using the kvm2 driver based on existing profile
	I1127 23:25:50.412971   12089 start.go:298] selected driver: kvm2
	I1127 23:25:50.412976   12089 start.go:902] validating driver "kvm2" against &{Name:download-only-480485 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-480485 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:25:50.413358   12089 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:25:50.413425   12089 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17206-4749/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1127 23:25:50.427937   12089 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1127 23:25:50.428652   12089 cni.go:84] Creating CNI manager for ""
	I1127 23:25:50.428671   12089 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1127 23:25:50.428683   12089 start_flags.go:323] config:
	{Name:download-only-480485 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-480485 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:25:50.428855   12089 iso.go:125] acquiring lock: {Name:mkcbf4fbddcb89ef7fa17df683cb708781ecb7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:25:50.430727   12089 out.go:97] Starting control plane node download-only-480485 in cluster download-only-480485
	I1127 23:25:50.430743   12089 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:25:50.936833   12089 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1127 23:25:50.936898   12089 cache.go:56] Caching tarball of preloaded images
	I1127 23:25:50.937056   12089 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1127 23:25:50.939145   12089 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1127 23:25:50.939179   12089 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:25:51.054311   12089 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1127 23:26:04.860577   12089 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:26:04.860684   12089 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-480485"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.0/json-events (43.58s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-480485 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-480485 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (43.577017202s)
--- PASS: TestDownloadOnly/v1.29.0-rc.0/json-events (43.58s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-480485
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-480485: exit status 85 (68.495391ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-480485 | jenkins | v1.32.0 | 27 Nov 23 23:24 UTC |          |
	|         | -p download-only-480485           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-480485 | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC |          |
	|         | -p download-only-480485           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-480485 | jenkins | v1.32.0 | 27 Nov 23 23:26 UTC |          |
	|         | -p download-only-480485           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.0 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:26:06
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:26:06.753612   12167 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:26:06.753845   12167 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:26:06.753853   12167 out.go:309] Setting ErrFile to fd 2...
	I1127 23:26:06.753857   12167 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:26:06.754029   12167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	W1127 23:26:06.754131   12167 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17206-4749/.minikube/config/config.json: open /home/jenkins/minikube-integration/17206-4749/.minikube/config/config.json: no such file or directory
	I1127 23:26:06.754540   12167 out.go:303] Setting JSON to true
	I1127 23:26:06.755316   12167 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":514,"bootTime":1701127053,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 23:26:06.755368   12167 start.go:138] virtualization: kvm guest
	I1127 23:26:06.757789   12167 out.go:97] [download-only-480485] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 23:26:06.759770   12167 out.go:169] MINIKUBE_LOCATION=17206
	I1127 23:26:06.757926   12167 notify.go:220] Checking for updates...
	I1127 23:26:06.762791   12167 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:26:06.764299   12167 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1127 23:26:06.765795   12167 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1127 23:26:06.767230   12167 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1127 23:26:06.769987   12167 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1127 23:26:06.770461   12167 config.go:182] Loaded profile config "download-only-480485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1127 23:26:06.770495   12167 start.go:810] api.Load failed for download-only-480485: filestore "download-only-480485": Docker machine "download-only-480485" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1127 23:26:06.770582   12167 driver.go:378] Setting default libvirt URI to qemu:///system
	W1127 23:26:06.770616   12167 start.go:810] api.Load failed for download-only-480485: filestore "download-only-480485": Docker machine "download-only-480485" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1127 23:26:06.800912   12167 out.go:97] Using the kvm2 driver based on existing profile
	I1127 23:26:06.800932   12167 start.go:298] selected driver: kvm2
	I1127 23:26:06.800936   12167 start.go:902] validating driver "kvm2" against &{Name:download-only-480485 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-480485 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:26:06.801295   12167 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:26:06.801351   12167 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17206-4749/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1127 23:26:06.814992   12167 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1127 23:26:06.815722   12167 cni.go:84] Creating CNI manager for ""
	I1127 23:26:06.815744   12167 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1127 23:26:06.815755   12167 start_flags.go:323] config:
	{Name:download-only-480485 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.0 ClusterName:download-only-480485 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:26:06.815889   12167 iso.go:125] acquiring lock: {Name:mkcbf4fbddcb89ef7fa17df683cb708781ecb7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:26:06.817681   12167 out.go:97] Starting control plane node download-only-480485 in cluster download-only-480485
	I1127 23:26:06.817696   12167 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1127 23:26:07.326658   12167 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.0/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I1127 23:26:07.326686   12167 cache.go:56] Caching tarball of preloaded images
	I1127 23:26:07.326836   12167 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1127 23:26:07.328732   12167 out.go:97] Downloading Kubernetes v1.29.0-rc.0 preload ...
	I1127 23:26:07.328749   12167 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:26:07.440464   12167 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.0/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:5686edee2f3c2c02d5f5e95cbdafe8b5 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I1127 23:26:19.018320   12167 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:26:19.018413   12167 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17206-4749/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I1127 23:26:19.830498   12167 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.0 on crio
	I1127 23:26:19.830629   12167 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/download-only-480485/config.json ...
	I1127 23:26:19.830820   12167 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1127 23:26:19.830999   12167 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17206-4749/.minikube/cache/linux/amd64/v1.29.0-rc.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-480485"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-480485
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-551564 --alsologtostderr --binary-mirror http://127.0.0.1:35587 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-551564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-551564
--- PASS: TestBinaryMirror (0.57s)
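For context, --binary-mirror swaps the default dl.k8s.io source for the kubectl/kubelet/kubeadm downloads; the test points it at a short-lived local HTTP server. A hedged sketch of the same invocation outside the suite (the profile name is illustrative and the mirror URL is whatever server you actually have listening):

	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:35587 \
	  --driver=kvm2 --container-runtime=crio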

                                                
                                    
x
+
TestOffline (104.79s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-152055 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-152055 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m43.546348201s)
helpers_test.go:175: Cleaning up "offline-crio-152055" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-152055
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-152055: (1.247414092s)
--- PASS: TestOffline (104.79s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-052905
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-052905: exit status 85 (60.837015ms)

                                                
                                                
-- stdout --
	* Profile "addons-052905" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-052905"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-052905
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-052905: exit status 85 (58.514062ms)

                                                
                                                
-- stdout --
	* Profile "addons-052905" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-052905"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (216.3s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-052905 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-052905 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m36.295827427s)
--- PASS: TestAddons/Setup (216.30s)
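Once a start like this completes, the per-addon state can be checked on the profile; a small sketch, assuming the same profile name as above:

	out/minikube-linux-amd64 -p addons-052905 addons list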

                                                
                                    
x
+
TestAddons/parallel/Registry (25.49s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 27.121923ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-fw9r2" [e8da5a5e-d8d8-4c96-a74e-61eb7f679a4d] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.039781559s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-dsmwb" [cae4e0b6-6db5-42bc-b440-c55e0d493d8f] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.028082832s
addons_test.go:339: (dbg) Run:  kubectl --context addons-052905 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-052905 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-052905 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (14.564368122s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-052905 ip
2023/11/27 23:30:52 [DEBUG] GET http://192.168.39.221:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-052905 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (25.49s)
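The registry check above hits the in-cluster service name with wget and then the node IP on port 5000 from the host. A hedged way to probe the same endpoint manually, using the standard registry HTTP API (the /v2/_catalog path comes from that API, not from this test):

	curl http://$(out/minikube-linux-amd64 -p addons-052905 ip):5000/v2/_catalog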

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.55s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xqwjv" [c520172e-5c6a-47e6-b0e1-1399c6dbe106] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012375047s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-052905
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-052905: (6.53558304s)
--- PASS: TestAddons/parallel/InspektorGadget (11.55s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 27.139024ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-pkfgc" [ab570363-b34e-40b9-babf-b27b0101e455] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.040159763s
addons_test.go:414: (dbg) Run:  kubectl --context addons-052905 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-052905 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:431: (dbg) Done: out/minikube-linux-amd64 -p addons-052905 addons disable metrics-server --alsologtostderr -v=1: (1.109659864s)
--- PASS: TestAddons/parallel/MetricsServer (6.26s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (12.52s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 5.807995ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-ng6qr" [52ea092e-2863-40a3-9738-710b1a17e38a] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.018828366s
addons_test.go:472: (dbg) Run:  kubectl --context addons-052905 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-052905 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.8413666s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-052905 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.52s)

                                                
                                    
x
+
TestAddons/parallel/CSI (46.38s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 27.425478ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-052905 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052905 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052905 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052905 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052905 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052905 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052905 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-052905 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e3e28755-9f03-457f-b23b-9acbac8461f1] Pending
helpers_test.go:344: "task-pv-pod" [e3e28755-9f03-457f-b23b-9acbac8461f1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e3e28755-9f03-457f-b23b-9acbac8461f1] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.024246622s
addons_test.go:583: (dbg) Run:  kubectl --context addons-052905 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-052905 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-052905 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-052905 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-052905 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-052905 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-052905 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052905 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052905 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052905 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052905 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-052905 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [23bd749b-3295-4675-a82b-f7a712ca29c3] Pending
helpers_test.go:344: "task-pv-pod-restore" [23bd749b-3295-4675-a82b-f7a712ca29c3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [23bd749b-3295-4675-a82b-f7a712ca29c3] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.028638292s
addons_test.go:625: (dbg) Run:  kubectl --context addons-052905 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-052905 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-052905 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-052905 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-052905 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.956800761s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-052905 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.38s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.52s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-052905 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-052905 --alsologtostderr -v=1: (1.496733143s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-l2b9s" [82ac3f4e-6823-4f96-8216-25c7378b5b94] Pending
helpers_test.go:344: "headlamp-777fd4b855-l2b9s" [82ac3f4e-6823-4f96-8216-25c7378b5b94] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-l2b9s" [82ac3f4e-6823-4f96-8216-25c7378b5b94] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.027346154s
--- PASS: TestAddons/parallel/Headlamp (17.52s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.75s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-qq5sd" [3ecb3188-3057-4c83-bf23-a211ce0ec815] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.020319566s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-052905
--- PASS: TestAddons/parallel/CloudSpanner (5.75s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (18.49s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-052905 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-052905 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052905 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052905 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052905 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052905 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052905 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052905 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052905 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052905 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052905 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052905 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d6cfca42-a5a5-46b8-8b20-56783ba5ecfc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d6cfca42-a5a5-46b8-8b20-56783ba5ecfc] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d6cfca42-a5a5-46b8-8b20-56783ba5ecfc] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.011904097s
addons_test.go:890: (dbg) Run:  kubectl --context addons-052905 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-052905 ssh "cat /opt/local-path-provisioner/pvc-b65845f2-c00c-42ed-bb18-1777f72877be_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-052905 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-052905 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-052905 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (18.49s)

TestAddons/parallel/NvidiaDevicePlugin (5.9s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-d844x" [810a535e-867e-4bfa-bc47-26b4aee7c94b] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.056766102s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-052905
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.90s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-052905 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-052905 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestCertOptions (56.11s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-188325 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-188325 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (54.468114755s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-188325 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-188325 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-188325 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-188325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-188325
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-188325: (1.068035356s)
--- PASS: TestCertOptions (56.11s)
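Note: the checks above confirm that the extra SANs and the non-default API server port reached the generated serving certificate and kubeconfig. A sketch of the same inspection done by hand, using the profile name from the log (the grep pattern and jsonpath query are illustrative):

# Print the SAN block of the apiserver certificate inside the VM; it should
# list 192.168.15.15 and www.google.com alongside the defaults.
out/minikube-linux-amd64 -p cert-options-188325 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
# The kubeconfig server URL should carry the requested --apiserver-port=8555.
kubectl --context cert-options-188325 config view -o jsonpath='{.clusters[?(@.name=="cert-options-188325")].cluster.server}'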

TestCertExpiration (286.08s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-747416 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-747416 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m26.857265502s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-747416 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-747416 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (18.223391973s)
helpers_test.go:175: Cleaning up "cert-expiration-747416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-747416
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-747416: (1.002144456s)
--- PASS: TestCertExpiration (286.08s)

TestForceSystemdFlag (108.15s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-795261 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-795261 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m47.041236977s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-795261 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-795261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-795261
--- PASS: TestForceSystemdFlag (108.15s)
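Note: the final ssh step is the actual assertion here: with --force-systemd the generated CRI-O drop-in should select the systemd cgroup manager. A quick manual check of the same file; the expected value is an assumption based on what the flag is meant to do, not quoted from this report:

out/minikube-linux-amd64 -p force-systemd-flag-795261 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
# Expect the [crio.runtime] section to contain:
#   cgroup_manager = "systemd"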

TestForceSystemdEnv (76.23s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-438559 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-438559 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m15.130340909s)
helpers_test.go:175: Cleaning up "force-systemd-env-438559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-438559
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-438559: (1.098323691s)
--- PASS: TestForceSystemdEnv (76.23s)

TestKVMDriverInstallOrUpdate (5.49s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.49s)

TestErrorSpam/setup (46.24s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-399674 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-399674 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-399674 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-399674 --driver=kvm2  --container-runtime=crio: (46.242356898s)
--- PASS: TestErrorSpam/setup (46.24s)

TestErrorSpam/start (0.37s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-399674 --log_dir /tmp/nospam-399674 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-399674 --log_dir /tmp/nospam-399674 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-399674 --log_dir /tmp/nospam-399674 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.79s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-399674 --log_dir /tmp/nospam-399674 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-399674 --log_dir /tmp/nospam-399674 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-399674 --log_dir /tmp/nospam-399674 status
--- PASS: TestErrorSpam/status (0.79s)

TestErrorSpam/pause (1.59s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-399674 --log_dir /tmp/nospam-399674 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-399674 --log_dir /tmp/nospam-399674 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-399674 --log_dir /tmp/nospam-399674 pause
--- PASS: TestErrorSpam/pause (1.59s)

TestErrorSpam/unpause (1.72s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-399674 --log_dir /tmp/nospam-399674 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-399674 --log_dir /tmp/nospam-399674 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-399674 --log_dir /tmp/nospam-399674 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

TestErrorSpam/stop (2.26s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-399674 --log_dir /tmp/nospam-399674 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-399674 --log_dir /tmp/nospam-399674 stop: (2.095562269s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-399674 --log_dir /tmp/nospam-399674 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-399674 --log_dir /tmp/nospam-399674 stop
--- PASS: TestErrorSpam/stop (2.26s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17206-4749/.minikube/files/etc/test/nested/copy/11930/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (63.3s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-004462 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-004462 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m3.295998184s)
--- PASS: TestFunctional/serial/StartWithProxy (63.30s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.09s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-004462 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-004462 --alsologtostderr -v=8: (38.085426757s)
functional_test.go:659: soft start took 38.086122261s for "functional-004462" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.09s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-004462 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-004462 cache add registry.k8s.io/pause:3.3: (1.120258055s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

TestFunctional/serial/CacheCmd/cache/add_local (2.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-004462 /tmp/TestFunctionalserialCacheCmdcacheadd_local4191729248/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 cache add minikube-local-cache-test:functional-004462
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-004462 cache add minikube-local-cache-test:functional-004462: (1.903451625s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 cache delete minikube-local-cache-test:functional-004462
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-004462
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.22s)
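Note: this sequence exercises the round trip for a locally built image: build it with docker, push it into minikube's on-disk cache, then clean up on both sides. Condensed into standalone commands (the image tag matches the log; <build-dir> stands in for the temporary Dockerfile directory the test generated):

docker build -t minikube-local-cache-test:functional-004462 <build-dir>
out/minikube-linux-amd64 -p functional-004462 cache add minikube-local-cache-test:functional-004462
out/minikube-linux-amd64 -p functional-004462 cache delete minikube-local-cache-test:functional-004462
docker rmi minikube-local-cache-test:functional-004462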

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-004462 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (237.862232ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)
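Note: the non-zero exit in the middle of this test is expected, not a failure: the image is removed from the runtime, crictl inspecti correctly reports it missing, and cache reload restores it from minikube's cache. The same loop as plain commands, taken from the steps logged above:

out/minikube-linux-amd64 -p functional-004462 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-004462 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
out/minikube-linux-amd64 -p functional-004462 cache reload
out/minikube-linux-amd64 -p functional-004462 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again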

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 kubectl -- --context functional-004462 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-004462 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (289.79s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-004462 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1127 23:40:27.682414   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1127 23:40:27.688146   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1127 23:40:27.698410   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1127 23:40:27.718691   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1127 23:40:27.758972   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1127 23:40:27.839288   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1127 23:40:27.999721   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1127 23:40:28.320310   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1127 23:40:28.961286   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1127 23:40:30.241479   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1127 23:40:32.801697   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1127 23:40:37.922167   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1127 23:40:48.162766   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1127 23:41:08.642912   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1127 23:41:49.604166   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1127 23:43:11.524919   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-004462 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (4m49.79451816s)
functional_test.go:757: restart took 4m49.794622403s for "functional-004462" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (289.79s)
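Note: this restart passes an --extra-config flag through to the apiserver; the repeated cert_rotation warnings interleaved above appear to come from a watcher still pointing at the earlier addons-052905 profile and do not affect this test's outcome. The flag form used here, for reference (the general shape is <component>.<flag>=<value>):

out/minikube-linux-amd64 start -p functional-004462 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
  --wait=all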

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-004462 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.14s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-004462 logs: (1.138177s)
--- PASS: TestFunctional/serial/LogsCmd (1.14s)

TestFunctional/serial/LogsFileCmd (1.14s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 logs --file /tmp/TestFunctionalserialLogsFileCmd4144799535/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-004462 logs --file /tmp/TestFunctionalserialLogsFileCmd4144799535/001/logs.txt: (1.138039396s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.14s)

TestFunctional/serial/InvalidService (4.09s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-004462 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-004462
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-004462: exit status 115 (318.469575ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.120:30216 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-004462 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.09s)
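Note: testdata/invalidsvc.yaml itself is not reproduced in the report; the SVC_UNREACHABLE exit simply means the NodePort service has no running backing pod. A hypothetical manifest that reproduces the same condition (the selector deliberately matches nothing; names and ports are illustrative):

kubectl --context functional-004462 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist   # no pod carries this label, so the service has no endpoints
  ports:
  - port: 80
    targetPort: 80
EOF
# "minikube service invalid-svc" then exits 115 with SVC_UNREACHABLE, as shown above.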

TestFunctional/parallel/ConfigCmd (0.43s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-004462 config get cpus: exit status 14 (74.455237ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-004462 config get cpus: exit status 14 (57.710658ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
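Note: the exit status 14 lines are the behaviour being asserted, not failures: config get returns 14 when the key is unset. The full cycle from the log, in plain form:

out/minikube-linux-amd64 -p functional-004462 config get cpus     # exit 14: key not set
out/minikube-linux-amd64 -p functional-004462 config set cpus 2
out/minikube-linux-amd64 -p functional-004462 config get cpus     # prints 2, exit 0
out/minikube-linux-amd64 -p functional-004462 config unset cpus
out/minikube-linux-amd64 -p functional-004462 config get cpus     # exit 14 again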

TestFunctional/parallel/DashboardCmd (31.44s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-004462 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-004462 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 20105: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (31.44s)

TestFunctional/parallel/DryRun (0.29s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-004462 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-004462 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (147.897432ms)

-- stdout --
	* [functional-004462] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1127 23:44:14.138971   19966 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:44:14.139095   19966 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:44:14.139104   19966 out.go:309] Setting ErrFile to fd 2...
	I1127 23:44:14.139109   19966 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:44:14.139302   19966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1127 23:44:14.139817   19966 out.go:303] Setting JSON to false
	I1127 23:44:14.140712   19966 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1601,"bootTime":1701127053,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 23:44:14.140786   19966 start.go:138] virtualization: kvm guest
	I1127 23:44:14.142882   19966 out.go:177] * [functional-004462] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 23:44:14.144282   19966 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:44:14.145583   19966 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:44:14.144352   19966 notify.go:220] Checking for updates...
	I1127 23:44:14.148396   19966 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1127 23:44:14.149824   19966 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1127 23:44:14.151190   19966 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 23:44:14.152513   19966 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:44:14.154061   19966 config.go:182] Loaded profile config "functional-004462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:44:14.154518   19966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:44:14.154572   19966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:44:14.169950   19966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45187
	I1127 23:44:14.170415   19966 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:44:14.170963   19966 main.go:141] libmachine: Using API Version  1
	I1127 23:44:14.170995   19966 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:44:14.171359   19966 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:44:14.171580   19966 main.go:141] libmachine: (functional-004462) Calling .DriverName
	I1127 23:44:14.171830   19966 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:44:14.172100   19966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:44:14.172131   19966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:44:14.187044   19966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36277
	I1127 23:44:14.187472   19966 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:44:14.187936   19966 main.go:141] libmachine: Using API Version  1
	I1127 23:44:14.187962   19966 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:44:14.188316   19966 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:44:14.188530   19966 main.go:141] libmachine: (functional-004462) Calling .DriverName
	I1127 23:44:14.222719   19966 out.go:177] * Using the kvm2 driver based on existing profile
	I1127 23:44:14.223965   19966 start.go:298] selected driver: kvm2
	I1127 23:44:14.223982   19966 start.go:902] validating driver "kvm2" against &{Name:functional-004462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-004462 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.120 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:44:14.224123   19966 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:44:14.226387   19966 out.go:177] 
	W1127 23:44:14.227880   19966 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1127 23:44:14.229379   19966 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-004462 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
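Note: exit status 23 is the intended outcome here: --dry-run validates the request without touching the VM, and 250MB is below the usable floor of 1800MB reported above, so the first invocation fails with RSRC_INSUFFICIENT_REQ_MEMORY while the second, without the undersized --memory, validates cleanly. In short:

out/minikube-linux-amd64 start -p functional-004462 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
# exit 23, RSRC_INSUFFICIENT_REQ_MEMORY (250MiB requested, 1800MB minimum)
out/minikube-linux-amd64 start -p functional-004462 --dry-run --driver=kvm2 --container-runtime=crio
# exit 0: the existing profile validates against the kvm2 driver without starting anything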

TestFunctional/parallel/InternationalLanguage (0.15s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-004462 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-004462 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (152.752037ms)

-- stdout --
	* [functional-004462] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1127 23:44:14.436598   20020 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:44:14.436727   20020 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:44:14.436737   20020 out.go:309] Setting ErrFile to fd 2...
	I1127 23:44:14.436742   20020 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:44:14.437063   20020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1127 23:44:14.437585   20020 out.go:303] Setting JSON to false
	I1127 23:44:14.438527   20020 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1602,"bootTime":1701127053,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 23:44:14.438591   20020 start.go:138] virtualization: kvm guest
	I1127 23:44:14.440696   20020 out.go:177] * [functional-004462] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1127 23:44:14.442653   20020 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:44:14.442660   20020 notify.go:220] Checking for updates...
	I1127 23:44:14.443971   20020 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:44:14.445395   20020 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1127 23:44:14.446717   20020 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1127 23:44:14.448064   20020 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 23:44:14.449357   20020 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:44:14.450969   20020 config.go:182] Loaded profile config "functional-004462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:44:14.451380   20020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:44:14.451454   20020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:44:14.465680   20020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42435
	I1127 23:44:14.466110   20020 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:44:14.466764   20020 main.go:141] libmachine: Using API Version  1
	I1127 23:44:14.466798   20020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:44:14.467198   20020 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:44:14.467363   20020 main.go:141] libmachine: (functional-004462) Calling .DriverName
	I1127 23:44:14.467649   20020 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:44:14.468480   20020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:44:14.468528   20020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:44:14.488218   20020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46819
	I1127 23:44:14.488665   20020 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:44:14.489144   20020 main.go:141] libmachine: Using API Version  1
	I1127 23:44:14.489171   20020 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:44:14.489580   20020 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:44:14.489813   20020 main.go:141] libmachine: (functional-004462) Calling .DriverName
	I1127 23:44:14.524440   20020 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1127 23:44:14.525708   20020 start.go:298] selected driver: kvm2
	I1127 23:44:14.525733   20020 start.go:902] validating driver "kvm2" against &{Name:functional-004462 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-004462 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.120 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:44:14.525857   20020 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:44:14.528081   20020 out.go:177] 
	W1127 23:44:14.529310   20020 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1127 23:44:14.530533   20020 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (1.06s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)

TestFunctional/parallel/ServiceCmdConnect (12.72s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-004462 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-004462 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-s7ckc" [9393dc51-5213-4b9c-8625-af4350e3395e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-s7ckc" [9393dc51-5213-4b9c-8625-af4350e3395e] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.019141937s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.120:32030
functional_test.go:1674: http://192.168.50.120:32030: success! body:
Hostname: hello-node-connect-55497b8b78-s7ckc

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.120:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.120:32030
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.72s)
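Note: the echoserver response above confirms end-to-end NodePort reachability. The workflow being exercised, reduced to its commands (image and port are the ones in the log; the curl step is an illustrative stand-in for the test's HTTP check):

kubectl --context functional-004462 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-004462 expose deployment hello-node-connect --type=NodePort --port=8080
URL=$(out/minikube-linux-amd64 -p functional-004462 service hello-node-connect --url)
curl -s "$URL"   # returns the echoserver request dump shown above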

TestFunctional/parallel/AddonsCmd (0.16s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (46.5s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [50ce89f5-5f60-4d8a-b157-7c7e2528266a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.013038256s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-004462 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-004462 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-004462 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-004462 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [365ffaf1-b563-4a02-b5af-bd7eb754256d] Pending
helpers_test.go:344: "sp-pod" [365ffaf1-b563-4a02-b5af-bd7eb754256d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [365ffaf1-b563-4a02-b5af-bd7eb754256d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.017116796s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-004462 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-004462 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-004462 delete -f testdata/storage-provisioner/pod.yaml: (2.896361601s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-004462 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b61d491a-d162-4967-a75c-c834a28cb347] Pending
helpers_test.go:344: "sp-pod" [b61d491a-d162-4967-a75c-c834a28cb347] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b61d491a-d162-4967-a75c-c834a28cb347] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.01970834s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-004462 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.50s)
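
The sequence above is the whole persistence check: write a marker file through the first pod, delete the pod, re-create it from the same manifest, and confirm the file is still present on the claim. A rough Go sketch that replays those kubectl steps; the context, pod name and paths are the ones logged here, the readiness wait between re-apply and the final ls is elided, and the helper is illustrative rather than the test's actual implementation.

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against the profile's context and echoes its output.
func kubectl(args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-004462"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	return err
}

func main() {
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},       // write marker via first pod
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"}, // drop the pod, keep the PVC
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},  // re-create the pod
		// (the real test waits for the new pod to become Ready here)
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"}, // marker must survive
	}
	for _, s := range steps {
		if err := kubectl(s...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}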

TestFunctional/parallel/SSHCmd (0.5s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

TestFunctional/parallel/CpCmd (1.04s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh -n functional-004462 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 cp functional-004462:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd593648444/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh -n functional-004462 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.04s)

TestFunctional/parallel/MySQL (28.23s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-004462 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-sb4qr" [435f99c4-4d63-4d41-9eda-468fe5896ef1] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-sb4qr" [435f99c4-4d63-4d41-9eda-468fe5896ef1] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.031842139s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-004462 exec mysql-859648c796-sb4qr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-004462 exec mysql-859648c796-sb4qr -- mysql -ppassword -e "show databases;": exit status 1 (137.723576ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-004462 exec mysql-859648c796-sb4qr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.23s)
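
The first "show databases" above fails with ERROR 2002 because mysqld is still starting inside the pod; the test simply runs the same command again and it succeeds. A small retry-with-backoff sketch around that kubectl exec, using the pod name from this run; the attempt count and sleep are illustrative choices, not the test's own values.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"--context", "functional-004462",
		"exec", "mysql-859648c796-sb4qr", "--",
		"mysql", "-ppassword", "-e", "show databases;",
	}
	var lastErr error
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		lastErr = err
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		// back off a little longer each time while mysqld finishes starting
		time.Sleep(time.Duration(attempt*2) * time.Second)
	}
	fmt.Println("giving up:", lastErr)
}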

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11930/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "sudo cat /etc/test/nested/copy/11930/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.77s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11930.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "sudo cat /etc/ssl/certs/11930.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11930.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "sudo cat /usr/share/ca-certificates/11930.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/119302.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "sudo cat /etc/ssl/certs/119302.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/119302.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "sudo cat /usr/share/ca-certificates/119302.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.77s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-004462 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-004462 ssh "sudo systemctl is-active docker": exit status 1 (225.994183ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-004462 ssh "sudo systemctl is-active containerd": exit status 1 (292.749414ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
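
All this test asserts is that, on a cri-o node, neither the docker nor the containerd unit is the active runtime: systemctl is-active prints the unit state on stdout and exits non-zero for any state other than "active" (the run above shows "inactive" on stdout with the remote command exiting with status 3). A local sketch of the same check, with the unit names taken from the log; it is an illustration, not the test's implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		// is-active exits 0 only when the unit is active; the state is on stdout.
		out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		if err == nil && state == "active" {
			fmt.Printf("%s: unexpectedly active\n", unit)
			continue
		}
		fmt.Printf("%s: %s (not the active runtime, as expected)\n", unit, state)
	}
}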

TestFunctional/parallel/License (0.61s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.61s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-004462 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-004462 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-vtl59" [021a5301-dc73-4ccd-950a-fae08a712b25] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-vtl59" [021a5301-dc73-4ccd-950a-fae08a712b25] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.020540504s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.24s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.75s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.75s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-004462 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-004462
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-004462
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-004462 image ls --format short --alsologtostderr:
I1127 23:44:36.407825   20625 out.go:296] Setting OutFile to fd 1 ...
I1127 23:44:36.408021   20625 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:44:36.408036   20625 out.go:309] Setting ErrFile to fd 2...
I1127 23:44:36.408044   20625 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:44:36.408374   20625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
I1127 23:44:36.409260   20625 config.go:182] Loaded profile config "functional-004462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:44:36.409420   20625 config.go:182] Loaded profile config "functional-004462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:44:36.410057   20625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1127 23:44:36.410118   20625 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 23:44:36.430262   20625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36753
I1127 23:44:36.430645   20625 main.go:141] libmachine: () Calling .GetVersion
I1127 23:44:36.431243   20625 main.go:141] libmachine: Using API Version  1
I1127 23:44:36.431267   20625 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 23:44:36.431595   20625 main.go:141] libmachine: () Calling .GetMachineName
I1127 23:44:36.431779   20625 main.go:141] libmachine: (functional-004462) Calling .GetState
I1127 23:44:36.433924   20625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1127 23:44:36.433958   20625 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 23:44:36.447805   20625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45915
I1127 23:44:36.448306   20625 main.go:141] libmachine: () Calling .GetVersion
I1127 23:44:36.448815   20625 main.go:141] libmachine: Using API Version  1
I1127 23:44:36.448872   20625 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 23:44:36.449180   20625 main.go:141] libmachine: () Calling .GetMachineName
I1127 23:44:36.449398   20625 main.go:141] libmachine: (functional-004462) Calling .DriverName
I1127 23:44:36.449842   20625 ssh_runner.go:195] Run: systemctl --version
I1127 23:44:36.449868   20625 main.go:141] libmachine: (functional-004462) Calling .GetSSHHostname
I1127 23:44:36.453286   20625 main.go:141] libmachine: (functional-004462) DBG | domain functional-004462 has defined MAC address 52:54:00:82:e3:ab in network mk-functional-004462
I1127 23:44:36.453763   20625 main.go:141] libmachine: (functional-004462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:e3:ab", ip: ""} in network mk-functional-004462: {Iface:virbr1 ExpiryTime:2023-11-28 00:37:21 +0000 UTC Type:0 Mac:52:54:00:82:e3:ab Iaid: IPaddr:192.168.50.120 Prefix:24 Hostname:functional-004462 Clientid:01:52:54:00:82:e3:ab}
I1127 23:44:36.453886   20625 main.go:141] libmachine: (functional-004462) DBG | domain functional-004462 has defined IP address 192.168.50.120 and MAC address 52:54:00:82:e3:ab in network mk-functional-004462
I1127 23:44:36.454031   20625 main.go:141] libmachine: (functional-004462) Calling .GetSSHPort
I1127 23:44:36.454263   20625 main.go:141] libmachine: (functional-004462) Calling .GetSSHKeyPath
I1127 23:44:36.454668   20625 main.go:141] libmachine: (functional-004462) Calling .GetSSHUsername
I1127 23:44:36.454858   20625 sshutil.go:53] new ssh client: &{IP:192.168.50.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/functional-004462/id_rsa Username:docker}
I1127 23:44:36.550558   20625 ssh_runner.go:195] Run: sudo crictl images --output json
I1127 23:44:36.594449   20625 main.go:141] libmachine: Making call to close driver server
I1127 23:44:36.594466   20625 main.go:141] libmachine: (functional-004462) Calling .Close
I1127 23:44:36.594751   20625 main.go:141] libmachine: Successfully made call to close driver server
I1127 23:44:36.594770   20625 main.go:141] libmachine: Making call to close connection to plugin binary
I1127 23:44:36.594786   20625 main.go:141] libmachine: Making call to close driver server
I1127 23:44:36.594796   20625 main.go:141] libmachine: (functional-004462) Calling .Close
I1127 23:44:36.595045   20625 main.go:141] libmachine: (functional-004462) DBG | Closing plugin on server side
I1127 23:44:36.595088   20625 main.go:141] libmachine: Successfully made call to close driver server
I1127 23:44:36.595102   20625 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-004462 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | bdba757bc9336 | 520MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-004462  | dd9e33057af0a | 3.35kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/google-containers/addon-resizer  | functional-004462  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| docker.io/library/nginx                 | latest             | a6bd71f48f683 | 191MB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-004462 image ls --format table --alsologtostderr:
I1127 23:44:37.409126   20851 out.go:296] Setting OutFile to fd 1 ...
I1127 23:44:37.409260   20851 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:44:37.409268   20851 out.go:309] Setting ErrFile to fd 2...
I1127 23:44:37.409273   20851 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:44:37.409470   20851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
I1127 23:44:37.410028   20851 config.go:182] Loaded profile config "functional-004462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:44:37.410121   20851 config.go:182] Loaded profile config "functional-004462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:44:37.410514   20851 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1127 23:44:37.410556   20851 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 23:44:37.424498   20851 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34979
I1127 23:44:37.424862   20851 main.go:141] libmachine: () Calling .GetVersion
I1127 23:44:37.425356   20851 main.go:141] libmachine: Using API Version  1
I1127 23:44:37.425380   20851 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 23:44:37.425765   20851 main.go:141] libmachine: () Calling .GetMachineName
I1127 23:44:37.425951   20851 main.go:141] libmachine: (functional-004462) Calling .GetState
I1127 23:44:37.427761   20851 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1127 23:44:37.427803   20851 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 23:44:37.441949   20851 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37699
I1127 23:44:37.442276   20851 main.go:141] libmachine: () Calling .GetVersion
I1127 23:44:37.442696   20851 main.go:141] libmachine: Using API Version  1
I1127 23:44:37.442725   20851 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 23:44:37.443010   20851 main.go:141] libmachine: () Calling .GetMachineName
I1127 23:44:37.443218   20851 main.go:141] libmachine: (functional-004462) Calling .DriverName
I1127 23:44:37.443437   20851 ssh_runner.go:195] Run: systemctl --version
I1127 23:44:37.443462   20851 main.go:141] libmachine: (functional-004462) Calling .GetSSHHostname
I1127 23:44:37.446061   20851 main.go:141] libmachine: (functional-004462) DBG | domain functional-004462 has defined MAC address 52:54:00:82:e3:ab in network mk-functional-004462
I1127 23:44:37.446483   20851 main.go:141] libmachine: (functional-004462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:e3:ab", ip: ""} in network mk-functional-004462: {Iface:virbr1 ExpiryTime:2023-11-28 00:37:21 +0000 UTC Type:0 Mac:52:54:00:82:e3:ab Iaid: IPaddr:192.168.50.120 Prefix:24 Hostname:functional-004462 Clientid:01:52:54:00:82:e3:ab}
I1127 23:44:37.446511   20851 main.go:141] libmachine: (functional-004462) DBG | domain functional-004462 has defined IP address 192.168.50.120 and MAC address 52:54:00:82:e3:ab in network mk-functional-004462
I1127 23:44:37.446662   20851 main.go:141] libmachine: (functional-004462) Calling .GetSSHPort
I1127 23:44:37.446834   20851 main.go:141] libmachine: (functional-004462) Calling .GetSSHKeyPath
I1127 23:44:37.447024   20851 main.go:141] libmachine: (functional-004462) Calling .GetSSHUsername
I1127 23:44:37.447166   20851 sshutil.go:53] new ssh client: &{IP:192.168.50.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/functional-004462/id_rsa Username:docker}
I1127 23:44:37.543010   20851 ssh_runner.go:195] Run: sudo crictl images --output json
I1127 23:44:37.583050   20851 main.go:141] libmachine: Making call to close driver server
I1127 23:44:37.583064   20851 main.go:141] libmachine: (functional-004462) Calling .Close
I1127 23:44:37.583343   20851 main.go:141] libmachine: Successfully made call to close driver server
I1127 23:44:37.583368   20851 main.go:141] libmachine: Making call to close connection to plugin binary
I1127 23:44:37.583394   20851 main.go:141] libmachine: Making call to close driver server
I1127 23:44:37.583408   20851 main.go:141] libmachine: (functional-004462) Calling .Close
I1127 23:44:37.583413   20851 main.go:141] libmachine: (functional-004462) DBG | Closing plugin on server side
I1127 23:44:37.583616   20851 main.go:141] libmachine: Successfully made call to close driver server
I1127 23:44:37.583634   20851 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-004462 image ls --format json --alsologtostderr:
[{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c","repoDigests":["docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ecc7a3","docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519653829"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762d
a6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k
8s.io/pause:3.3"],"size":"686139"},{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee","docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7"],"repoTags":["docker.io/library/nginx:latest"],"size":"190960382"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"
97846543"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"ffd4cfbbe753e6241
9e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-004462"],"size":"34114467"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43
227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"dd9e33057af0a7f67d560fbfe49bcc18d232d35a4fbd14a08719e755a3bf5939","repoDigests":["localhost/minikube-local-cache-test@sha256:f295260380741d6b7b2bfe98abed8b50c089c5b2de89ce40a1ec5c5418cc99ff"],"repoTags":["localhost/minikube-local-cache-test:functional-004462"],"size":"3345"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480c
c47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-004462 image ls --format json --alsologtostderr:
I1127 23:44:37.145010   20812 out.go:296] Setting OutFile to fd 1 ...
I1127 23:44:37.145156   20812 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:44:37.145168   20812 out.go:309] Setting ErrFile to fd 2...
I1127 23:44:37.145174   20812 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:44:37.145378   20812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
I1127 23:44:37.145931   20812 config.go:182] Loaded profile config "functional-004462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:44:37.146035   20812 config.go:182] Loaded profile config "functional-004462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:44:37.146431   20812 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1127 23:44:37.146483   20812 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 23:44:37.160916   20812 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33305
I1127 23:44:37.161358   20812 main.go:141] libmachine: () Calling .GetVersion
I1127 23:44:37.161932   20812 main.go:141] libmachine: Using API Version  1
I1127 23:44:37.161960   20812 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 23:44:37.162326   20812 main.go:141] libmachine: () Calling .GetMachineName
I1127 23:44:37.162482   20812 main.go:141] libmachine: (functional-004462) Calling .GetState
I1127 23:44:37.164303   20812 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1127 23:44:37.164349   20812 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 23:44:37.178934   20812 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42325
I1127 23:44:37.179357   20812 main.go:141] libmachine: () Calling .GetVersion
I1127 23:44:37.179803   20812 main.go:141] libmachine: Using API Version  1
I1127 23:44:37.179818   20812 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 23:44:37.180077   20812 main.go:141] libmachine: () Calling .GetMachineName
I1127 23:44:37.180271   20812 main.go:141] libmachine: (functional-004462) Calling .DriverName
I1127 23:44:37.180473   20812 ssh_runner.go:195] Run: systemctl --version
I1127 23:44:37.180501   20812 main.go:141] libmachine: (functional-004462) Calling .GetSSHHostname
I1127 23:44:37.183054   20812 main.go:141] libmachine: (functional-004462) DBG | domain functional-004462 has defined MAC address 52:54:00:82:e3:ab in network mk-functional-004462
I1127 23:44:37.183555   20812 main.go:141] libmachine: (functional-004462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:e3:ab", ip: ""} in network mk-functional-004462: {Iface:virbr1 ExpiryTime:2023-11-28 00:37:21 +0000 UTC Type:0 Mac:52:54:00:82:e3:ab Iaid: IPaddr:192.168.50.120 Prefix:24 Hostname:functional-004462 Clientid:01:52:54:00:82:e3:ab}
I1127 23:44:37.183589   20812 main.go:141] libmachine: (functional-004462) DBG | domain functional-004462 has defined IP address 192.168.50.120 and MAC address 52:54:00:82:e3:ab in network mk-functional-004462
I1127 23:44:37.183744   20812 main.go:141] libmachine: (functional-004462) Calling .GetSSHPort
I1127 23:44:37.183928   20812 main.go:141] libmachine: (functional-004462) Calling .GetSSHKeyPath
I1127 23:44:37.184080   20812 main.go:141] libmachine: (functional-004462) Calling .GetSSHUsername
I1127 23:44:37.184204   20812 sshutil.go:53] new ssh client: &{IP:192.168.50.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/functional-004462/id_rsa Username:docker}
I1127 23:44:37.288326   20812 ssh_runner.go:195] Run: sudo crictl images --output json
I1127 23:44:37.348206   20812 main.go:141] libmachine: Making call to close driver server
I1127 23:44:37.348225   20812 main.go:141] libmachine: (functional-004462) Calling .Close
I1127 23:44:37.348501   20812 main.go:141] libmachine: Successfully made call to close driver server
I1127 23:44:37.348544   20812 main.go:141] libmachine: Making call to close connection to plugin binary
I1127 23:44:37.348563   20812 main.go:141] libmachine: Making call to close driver server
I1127 23:44:37.348576   20812 main.go:141] libmachine: (functional-004462) Calling .Close
I1127 23:44:37.348823   20812 main.go:141] libmachine: Successfully made call to close driver server
I1127 23:44:37.348840   20812 main.go:141] libmachine: Making call to close connection to plugin binary
I1127 23:44:37.348905   20812 main.go:141] libmachine: (functional-004462) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
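
The JSON listing above is a flat array of objects with id, repoDigests, repoTags and size (size is a string of bytes, and untagged images such as the metrics-scraper entry carry an empty repoTags list). A short, illustrative Go program that decodes output of that shape from stdin and prints one line per image; the struct simply mirrors the fields visible in this run and is not a published type.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	var images []image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode failed:", err)
		os.Exit(1)
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-60s %12s bytes  %.13s\n", tag, img.Size, img.ID)
	}
}

Piping something like "out/minikube-linux-amd64 -p functional-004462 image ls --format json" into this program would reproduce a compact version of the table shown for the ImageListTable run.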

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-004462 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests:
- docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ecc7a3
- docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1
repoTags:
- docker.io/library/mysql:5.7
size: "519653829"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-004462
size: "34114467"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
- docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: dd9e33057af0a7f67d560fbfe49bcc18d232d35a4fbd14a08719e755a3bf5939
repoDigests:
- localhost/minikube-local-cache-test@sha256:f295260380741d6b7b2bfe98abed8b50c089c5b2de89ce40a1ec5c5418cc99ff
repoTags:
- localhost/minikube-local-cache-test:functional-004462
size: "3345"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-004462 image ls --format yaml --alsologtostderr:
I1127 23:44:36.665198   20685 out.go:296] Setting OutFile to fd 1 ...
I1127 23:44:36.665450   20685 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:44:36.665460   20685 out.go:309] Setting ErrFile to fd 2...
I1127 23:44:36.665465   20685 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:44:36.665652   20685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
I1127 23:44:36.666200   20685 config.go:182] Loaded profile config "functional-004462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:44:36.666318   20685 config.go:182] Loaded profile config "functional-004462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:44:36.666723   20685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1127 23:44:36.666785   20685 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 23:44:36.681327   20685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38911
I1127 23:44:36.681798   20685 main.go:141] libmachine: () Calling .GetVersion
I1127 23:44:36.682329   20685 main.go:141] libmachine: Using API Version  1
I1127 23:44:36.682351   20685 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 23:44:36.682821   20685 main.go:141] libmachine: () Calling .GetMachineName
I1127 23:44:36.683029   20685 main.go:141] libmachine: (functional-004462) Calling .GetState
I1127 23:44:36.685057   20685 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1127 23:44:36.685096   20685 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 23:44:36.699476   20685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46271
I1127 23:44:36.699982   20685 main.go:141] libmachine: () Calling .GetVersion
I1127 23:44:36.700496   20685 main.go:141] libmachine: Using API Version  1
I1127 23:44:36.700522   20685 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 23:44:36.700886   20685 main.go:141] libmachine: () Calling .GetMachineName
I1127 23:44:36.701097   20685 main.go:141] libmachine: (functional-004462) Calling .DriverName
I1127 23:44:36.701289   20685 ssh_runner.go:195] Run: systemctl --version
I1127 23:44:36.701319   20685 main.go:141] libmachine: (functional-004462) Calling .GetSSHHostname
I1127 23:44:36.704693   20685 main.go:141] libmachine: (functional-004462) DBG | domain functional-004462 has defined MAC address 52:54:00:82:e3:ab in network mk-functional-004462
I1127 23:44:36.705088   20685 main.go:141] libmachine: (functional-004462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:e3:ab", ip: ""} in network mk-functional-004462: {Iface:virbr1 ExpiryTime:2023-11-28 00:37:21 +0000 UTC Type:0 Mac:52:54:00:82:e3:ab Iaid: IPaddr:192.168.50.120 Prefix:24 Hostname:functional-004462 Clientid:01:52:54:00:82:e3:ab}
I1127 23:44:36.705160   20685 main.go:141] libmachine: (functional-004462) DBG | domain functional-004462 has defined IP address 192.168.50.120 and MAC address 52:54:00:82:e3:ab in network mk-functional-004462
I1127 23:44:36.705515   20685 main.go:141] libmachine: (functional-004462) Calling .GetSSHPort
I1127 23:44:36.705664   20685 main.go:141] libmachine: (functional-004462) Calling .GetSSHKeyPath
I1127 23:44:36.705858   20685 main.go:141] libmachine: (functional-004462) Calling .GetSSHUsername
I1127 23:44:36.706064   20685 sshutil.go:53] new ssh client: &{IP:192.168.50.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/functional-004462/id_rsa Username:docker}
I1127 23:44:36.805618   20685 ssh_runner.go:195] Run: sudo crictl images --output json
I1127 23:44:36.857001   20685 main.go:141] libmachine: Making call to close driver server
I1127 23:44:36.857016   20685 main.go:141] libmachine: (functional-004462) Calling .Close
I1127 23:44:36.857329   20685 main.go:141] libmachine: (functional-004462) DBG | Closing plugin on server side
I1127 23:44:36.857329   20685 main.go:141] libmachine: Successfully made call to close driver server
I1127 23:44:36.857372   20685 main.go:141] libmachine: Making call to close connection to plugin binary
I1127 23:44:36.857388   20685 main.go:141] libmachine: Making call to close driver server
I1127 23:44:36.857400   20685 main.go:141] libmachine: (functional-004462) Calling .Close
I1127 23:44:36.857679   20685 main.go:141] libmachine: Successfully made call to close driver server
I1127 23:44:36.857692   20685 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (7.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-004462 ssh pgrep buildkitd: exit status 1 (206.876442ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 image build -t localhost/my-image:functional-004462 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-004462 image build -t localhost/my-image:functional-004462 testdata/build --alsologtostderr: (6.723258475s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-004462 image build -t localhost/my-image:functional-004462 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 642647dee61
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-004462
--> 36eae0e0e06
Successfully tagged localhost/my-image:functional-004462
36eae0e0e06a2511faa1b38fcf801ea450e11b6f5137b72c9018346b39f14d83
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-004462 image build -t localhost/my-image:functional-004462 testdata/build --alsologtostderr:
I1127 23:44:37.134529   20804 out.go:296] Setting OutFile to fd 1 ...
I1127 23:44:37.134727   20804 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:44:37.134739   20804 out.go:309] Setting ErrFile to fd 2...
I1127 23:44:37.134743   20804 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:44:37.134953   20804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
I1127 23:44:37.135515   20804 config.go:182] Loaded profile config "functional-004462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:44:37.135960   20804 config.go:182] Loaded profile config "functional-004462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1127 23:44:37.136337   20804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1127 23:44:37.136373   20804 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 23:44:37.151716   20804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44363
I1127 23:44:37.152143   20804 main.go:141] libmachine: () Calling .GetVersion
I1127 23:44:37.152690   20804 main.go:141] libmachine: Using API Version  1
I1127 23:44:37.152712   20804 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 23:44:37.153128   20804 main.go:141] libmachine: () Calling .GetMachineName
I1127 23:44:37.153308   20804 main.go:141] libmachine: (functional-004462) Calling .GetState
I1127 23:44:37.155210   20804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1127 23:44:37.155262   20804 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 23:44:37.168860   20804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44099
I1127 23:44:37.169215   20804 main.go:141] libmachine: () Calling .GetVersion
I1127 23:44:37.169708   20804 main.go:141] libmachine: Using API Version  1
I1127 23:44:37.169738   20804 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 23:44:37.170040   20804 main.go:141] libmachine: () Calling .GetMachineName
I1127 23:44:37.170228   20804 main.go:141] libmachine: (functional-004462) Calling .DriverName
I1127 23:44:37.170417   20804 ssh_runner.go:195] Run: systemctl --version
I1127 23:44:37.170446   20804 main.go:141] libmachine: (functional-004462) Calling .GetSSHHostname
I1127 23:44:37.172919   20804 main.go:141] libmachine: (functional-004462) DBG | domain functional-004462 has defined MAC address 52:54:00:82:e3:ab in network mk-functional-004462
I1127 23:44:37.173305   20804 main.go:141] libmachine: (functional-004462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:e3:ab", ip: ""} in network mk-functional-004462: {Iface:virbr1 ExpiryTime:2023-11-28 00:37:21 +0000 UTC Type:0 Mac:52:54:00:82:e3:ab Iaid: IPaddr:192.168.50.120 Prefix:24 Hostname:functional-004462 Clientid:01:52:54:00:82:e3:ab}
I1127 23:44:37.173332   20804 main.go:141] libmachine: (functional-004462) DBG | domain functional-004462 has defined IP address 192.168.50.120 and MAC address 52:54:00:82:e3:ab in network mk-functional-004462
I1127 23:44:37.173450   20804 main.go:141] libmachine: (functional-004462) Calling .GetSSHPort
I1127 23:44:37.173597   20804 main.go:141] libmachine: (functional-004462) Calling .GetSSHKeyPath
I1127 23:44:37.173743   20804 main.go:141] libmachine: (functional-004462) Calling .GetSSHUsername
I1127 23:44:37.173846   20804 sshutil.go:53] new ssh client: &{IP:192.168.50.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/functional-004462/id_rsa Username:docker}
I1127 23:44:37.271702   20804 build_images.go:151] Building image from path: /tmp/build.4059860226.tar
I1127 23:44:37.271762   20804 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1127 23:44:37.284358   20804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4059860226.tar
I1127 23:44:37.290906   20804 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4059860226.tar: stat -c "%s %y" /var/lib/minikube/build/build.4059860226.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4059860226.tar': No such file or directory
I1127 23:44:37.290938   20804 ssh_runner.go:362] scp /tmp/build.4059860226.tar --> /var/lib/minikube/build/build.4059860226.tar (3072 bytes)
I1127 23:44:37.330413   20804 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4059860226
I1127 23:44:37.351264   20804 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4059860226 -xf /var/lib/minikube/build/build.4059860226.tar
I1127 23:44:37.361809   20804 crio.go:297] Building image: /var/lib/minikube/build/build.4059860226
I1127 23:44:37.361873   20804 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-004462 /var/lib/minikube/build/build.4059860226 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1127 23:44:43.752398   20804 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-004462 /var/lib/minikube/build/build.4059860226 --cgroup-manager=cgroupfs: (6.390498217s)
I1127 23:44:43.752487   20804 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4059860226
I1127 23:44:43.776898   20804 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4059860226.tar
I1127 23:44:43.786535   20804 build_images.go:207] Built localhost/my-image:functional-004462 from /tmp/build.4059860226.tar
I1127 23:44:43.786568   20804 build_images.go:123] succeeded building to: functional-004462
I1127 23:44:43.786572   20804 build_images.go:124] failed building to: 
I1127 23:44:43.786610   20804 main.go:141] libmachine: Making call to close driver server
I1127 23:44:43.786631   20804 main.go:141] libmachine: (functional-004462) Calling .Close
I1127 23:44:43.786952   20804 main.go:141] libmachine: Successfully made call to close driver server
I1127 23:44:43.786968   20804 main.go:141] libmachine: (functional-004462) DBG | Closing plugin on server side
I1127 23:44:43.786970   20804 main.go:141] libmachine: Making call to close connection to plugin binary
I1127 23:44:43.786982   20804 main.go:141] libmachine: Making call to close driver server
I1127 23:44:43.787003   20804 main.go:141] libmachine: (functional-004462) Calling .Close
I1127 23:44:43.787281   20804 main.go:141] libmachine: Successfully made call to close driver server
I1127 23:44:43.787297   20804 main.go:141] libmachine: Making call to close connection to plugin binary
I1127 23:44:43.787312   20804 main.go:141] libmachine: (functional-004462) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 image ls
2023/11/27 23:44:45 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (7.17s)
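Note: the podman build recorded above is a three-step container build (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). Below is a minimal Go sketch, not the harness's actual implementation, of driving the same flow by hand with the binary path and profile name shown in this log; the runMinikube helper is hypothetical.

package main

import (
	"fmt"
	"os/exec"
)

// runMinikube is a hypothetical helper (not part of functional_test.go): it shells out
// to the minikube binary used in this report and returns combined stdout/stderr,
// mirroring the "(dbg) Run:" lines above.
func runMinikube(args ...string) (string, error) {
	base := []string{"-p", "functional-004462"}
	out, err := exec.Command("out/minikube-linux-amd64", append(base, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	// Build the image from the same testdata/build context the test uses.
	if out, err := runMinikube("image", "build", "-t", "localhost/my-image:functional-004462", "testdata/build"); err != nil {
		fmt.Println(out)
		panic(err)
	}
	// Confirm the tag is visible in the runtime, as the "image ls" step above does.
	out, err := runMinikube("image", "ls")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}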

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.194088452s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-004462
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 image load --daemon gcr.io/google-containers/addon-resizer:functional-004462 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-004462 image load --daemon gcr.io/google-containers/addon-resizer:functional-004462 --alsologtostderr: (3.766400556s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 image load --daemon gcr.io/google-containers/addon-resizer:functional-004462 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-004462 image load --daemon gcr.io/google-containers/addon-resizer:functional-004462 --alsologtostderr: (2.423277654s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.145009371s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-004462
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 image load --daemon gcr.io/google-containers/addon-resizer:functional-004462 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-004462 image load --daemon gcr.io/google-containers/addon-resizer:functional-004462 --alsologtostderr: (4.396149016s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.86s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 service list -o json
functional_test.go:1493: Took "463.233027ms" to run "out/minikube-linux-amd64 -p functional-004462 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.120:30876
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.120:30876
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "247.156785ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "69.45049ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (27.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-004462 /tmp/TestFunctionalparallelMountCmdany-port2379135374/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1701128646529535177" to /tmp/TestFunctionalparallelMountCmdany-port2379135374/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1701128646529535177" to /tmp/TestFunctionalparallelMountCmdany-port2379135374/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1701128646529535177" to /tmp/TestFunctionalparallelMountCmdany-port2379135374/001/test-1701128646529535177
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-004462 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (267.266506ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 27 23:44 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 27 23:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 27 23:44 test-1701128646529535177
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh cat /mount-9p/test-1701128646529535177
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-004462 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3ecb2040-a566-4f9c-b26a-43af1728ff81] Pending
helpers_test.go:344: "busybox-mount" [3ecb2040-a566-4f9c-b26a-43af1728ff81] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3ecb2040-a566-4f9c-b26a-43af1728ff81] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3ecb2040-a566-4f9c-b26a-43af1728ff81] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 24.034997595s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-004462 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-004462 /tmp/TestFunctionalparallelMountCmdany-port2379135374/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (27.01s)
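Note: the first findmnt probe above exits 1 because the 9p mount is still coming up, and the harness simply retries. Below is a minimal Go sketch of that verify-with-retry pattern, reusing the command strings from the log; the waitForMount helper and the one-second poll interval are illustrative assumptions, not the test's code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount probes the guest with findmnt until the 9p mount at the given path
// is visible, mirroring the retried `ssh "findmnt -T /mount-9p | grep 9p"` calls above.
func waitForMount(path string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-004462",
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", path))
		if err := cmd.Run(); err == nil {
			return nil // mount is visible in the guest
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("mount %s never became visible", path)
}

func main() {
	if err := waitForMount("/mount-9p", 10); err != nil {
		panic(err)
	}
	fmt.Println("/mount-9p is mounted")
}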

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "285.531375ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "57.847986ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 image save gcr.io/google-containers/addon-resizer:functional-004462 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-004462 image save gcr.io/google-containers/addon-resizer:functional-004462 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.393347049s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 image rm gcr.io/google-containers/addon-resizer:functional-004462 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-004462 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.144292068s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-004462
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 image save --daemon gcr.io/google-containers/addon-resizer:functional-004462 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-004462 image save --daemon gcr.io/google-containers/addon-resizer:functional-004462 --alsologtostderr: (1.892406896s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-004462
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.93s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-004462 /tmp/TestFunctionalparallelMountCmdspecific-port229057151/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-004462 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (223.008212ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-004462 /tmp/TestFunctionalparallelMountCmdspecific-port229057151/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-004462 ssh "sudo umount -f /mount-9p": exit status 1 (279.721394ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-004462 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-004462 /tmp/TestFunctionalparallelMountCmdspecific-port229057151/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.06s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-004462 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3200424757/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-004462 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3200424757/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-004462 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3200424757/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-004462 ssh "findmnt -T" /mount1: exit status 1 (258.150917ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-004462 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-004462 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-004462 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3200424757/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-004462 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3200424757/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-004462 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3200424757/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.50s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-004462
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-004462
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-004462
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (110.84s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-142525 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1127 23:45:27.680580   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1127 23:45:55.365167   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-142525 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m50.841547296s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (110.84s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.94s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-142525 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-142525 addons enable ingress --alsologtostderr -v=5: (16.939152257s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.94s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.57s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-142525 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.57s)

                                                
                                    
TestJSONOutput/start/Command (61.55s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-416755 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1127 23:50:12.911342   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1127 23:50:27.681666   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-416755 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.554523208s)
--- PASS: TestJSONOutput/start/Command (61.55s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-416755 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-416755 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.12s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-416755 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-416755 --output=json --user=testUser: (7.119962645s)
--- PASS: TestJSONOutput/stop/Command (7.12s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-875949 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-875949 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.54154ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"391ae24f-b6a2-490f-87e0-af68a66b3746","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-875949] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f0f2c93-b7fc-4f13-96c1-0f17a2f2d106","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17206"}}
	{"specversion":"1.0","id":"3bf77e0a-15e5-4ee6-b902-e2cec983d0a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"22a8beee-3b58-4772-9125-f565bbe2d961","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig"}}
	{"specversion":"1.0","id":"bf1f9e47-e8f7-49b4-be97-a5d89f7e6760","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube"}}
	{"specversion":"1.0","id":"9bba39dc-b39b-4014-baa5-5ecaad601190","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9ff11e54-2352-4d0e-bb83-44e42784d17b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3172ed69-3604-4642-a74f-30197d052ab0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-875949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-875949
--- PASS: TestErrorJSONOutput (0.22s)
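Note: each stdout line above is a CloudEvents-style JSON object. Below is a minimal Go sketch of decoding one such line, using only the fields visible in this output; the event struct is illustrative and not one of minikube's own types.

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the fields visible in the --output=json lines above.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Sample taken from the error event emitted above (unused fields omitted).
	line := `{"specversion":"1.0","id":"3172ed69-3604-4642-a74f-30197d052ab0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	// An event of type io.k8s.sigs.minikube.error carries the exit code and message.
	fmt.Println(e.Type, e.Data["name"], e.Data["exitcode"], e.Data["message"])
}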

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (93.52s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-113609 --driver=kvm2  --container-runtime=crio
E1127 23:51:34.832118   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-113609 --driver=kvm2  --container-runtime=crio: (44.114208066s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-116335 --driver=kvm2  --container-runtime=crio
E1127 23:51:55.432436   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1127 23:51:55.437734   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1127 23:51:55.447997   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1127 23:51:55.468231   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1127 23:51:55.508465   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1127 23:51:55.588810   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1127 23:51:55.749252   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1127 23:51:56.069850   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1127 23:51:56.710810   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1127 23:51:57.991240   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1127 23:52:00.552378   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1127 23:52:05.673102   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1127 23:52:15.913328   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1127 23:52:36.393861   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-116335 --driver=kvm2  --container-runtime=crio: (46.729914812s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-113609
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-116335
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-116335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-116335
helpers_test.go:175: Cleaning up "first-113609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-113609
--- PASS: TestMinikubeProfile (93.52s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.96s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-266908 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-266908 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.960877647s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.96s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-266908 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-266908 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.5s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-279495 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1127 23:53:17.355283   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-279495 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.498356453s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.50s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-279495 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-279495 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-266908 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-279495 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-279495 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
TestMountStart/serial/Stop (1.18s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-279495
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-279495: (1.184607009s)
--- PASS: TestMountStart/serial/Stop (1.18s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.19s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-279495
E1127 23:53:50.987947   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-279495: (23.194765264s)
--- PASS: TestMountStart/serial/RestartStopped (24.19s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-279495 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-279495 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (109.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-883509 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1127 23:54:18.673260   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1127 23:54:39.275532   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1127 23:55:27.680263   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-883509 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m49.284085508s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.71s)
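The two-node bring-up above can be reproduced with the same flags; roughly (profile name from this run):

$ out/minikube-linux-amd64 start -p multinode-883509 --wait=true --memory=2200 \
    --nodes=2 -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
$ out/minikube-linux-amd64 -p multinode-883509 status --alsologtostderr
# both the control plane and the m02 worker should report Running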

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-883509 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-883509 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-883509 -- rollout status deployment/busybox: (3.684495305s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-883509 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-883509 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-883509 -- exec busybox-5bc68d56bd-9qz8x -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-883509 -- exec busybox-5bc68d56bd-lgwvm -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-883509 -- exec busybox-5bc68d56bd-9qz8x -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-883509 -- exec busybox-5bc68d56bd-lgwvm -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-883509 -- exec busybox-5bc68d56bd-9qz8x -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-883509 -- exec busybox-5bc68d56bd-lgwvm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.38s)
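The DNS checks deploy a two-replica busybox Deployment (scheduled across both nodes) and resolve cluster names from each pod. A condensed sketch; the busybox pod names are generated per run, so <busybox-pod> is a placeholder:

$ out/minikube-linux-amd64 kubectl -p multinode-883509 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
$ out/minikube-linux-amd64 kubectl -p multinode-883509 -- rollout status deployment/busybox
$ out/minikube-linux-amd64 kubectl -p multinode-883509 -- get pods -o jsonpath='{.items[*].metadata.name}'
$ out/minikube-linux-amd64 kubectl -p multinode-883509 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local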

                                                
                                    
x
+
TestMultiNode/serial/AddNode (45.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-883509 -v 3 --alsologtostderr
E1127 23:56:50.725984   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-883509 -v 3 --alsologtostderr: (45.130514537s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.73s)
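Adding a worker to an existing profile is a single command; a sketch with the profile from this run:

$ out/minikube-linux-amd64 node add -p multinode-883509 -v 3 --alsologtostderr
$ out/minikube-linux-amd64 -p multinode-883509 status --alsologtostderr
# the new multinode-883509-m03 worker should now be listed as Running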

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 cp testdata/cp-test.txt multinode-883509:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 ssh -n multinode-883509 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 cp multinode-883509:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2595831742/001/cp-test_multinode-883509.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 ssh -n multinode-883509 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 cp multinode-883509:/home/docker/cp-test.txt multinode-883509-m02:/home/docker/cp-test_multinode-883509_multinode-883509-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 ssh -n multinode-883509 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 ssh -n multinode-883509-m02 "sudo cat /home/docker/cp-test_multinode-883509_multinode-883509-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 cp multinode-883509:/home/docker/cp-test.txt multinode-883509-m03:/home/docker/cp-test_multinode-883509_multinode-883509-m03.txt
E1127 23:56:55.432875   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 ssh -n multinode-883509 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 ssh -n multinode-883509-m03 "sudo cat /home/docker/cp-test_multinode-883509_multinode-883509-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 cp testdata/cp-test.txt multinode-883509-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 ssh -n multinode-883509-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 cp multinode-883509-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2595831742/001/cp-test_multinode-883509-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 ssh -n multinode-883509-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 cp multinode-883509-m02:/home/docker/cp-test.txt multinode-883509:/home/docker/cp-test_multinode-883509-m02_multinode-883509.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 ssh -n multinode-883509-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 ssh -n multinode-883509 "sudo cat /home/docker/cp-test_multinode-883509-m02_multinode-883509.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 cp multinode-883509-m02:/home/docker/cp-test.txt multinode-883509-m03:/home/docker/cp-test_multinode-883509-m02_multinode-883509-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 ssh -n multinode-883509-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 ssh -n multinode-883509-m03 "sudo cat /home/docker/cp-test_multinode-883509-m02_multinode-883509-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 cp testdata/cp-test.txt multinode-883509-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 ssh -n multinode-883509-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 cp multinode-883509-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2595831742/001/cp-test_multinode-883509-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 ssh -n multinode-883509-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 cp multinode-883509-m03:/home/docker/cp-test.txt multinode-883509:/home/docker/cp-test_multinode-883509-m03_multinode-883509.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 ssh -n multinode-883509-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 ssh -n multinode-883509 "sudo cat /home/docker/cp-test_multinode-883509-m03_multinode-883509.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 cp multinode-883509-m03:/home/docker/cp-test.txt multinode-883509-m02:/home/docker/cp-test_multinode-883509-m03_multinode-883509-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 ssh -n multinode-883509-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 ssh -n multinode-883509-m02 "sudo cat /home/docker/cp-test_multinode-883509-m03_multinode-883509-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.72s)
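The copy matrix above is driven by minikube cp, which accepts a host path or <node>:<path> on either side, with each transfer verified over ssh. A shortened sketch (node names from this run; destination paths abbreviated):

$ out/minikube-linux-amd64 -p multinode-883509 cp testdata/cp-test.txt multinode-883509:/home/docker/cp-test.txt
$ out/minikube-linux-amd64 -p multinode-883509 cp multinode-883509:/home/docker/cp-test.txt /tmp/cp-test.txt
$ out/minikube-linux-amd64 -p multinode-883509 cp multinode-883509:/home/docker/cp-test.txt multinode-883509-m02:/home/docker/cp-test.txt
$ out/minikube-linux-amd64 -p multinode-883509 ssh -n multinode-883509-m02 "sudo cat /home/docker/cp-test.txt"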

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-883509 node stop m03: (1.390651375s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-883509 status: exit status 7 (441.791928ms)

                                                
                                                
-- stdout --
	multinode-883509
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-883509-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-883509-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-883509 status --alsologtostderr: exit status 7 (440.992809ms)

                                                
                                                
-- stdout --
	multinode-883509
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-883509-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-883509-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1127 23:57:02.689154   27732 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:57:02.689279   27732 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:57:02.689287   27732 out.go:309] Setting ErrFile to fd 2...
	I1127 23:57:02.689292   27732 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:57:02.689467   27732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1127 23:57:02.689626   27732 out.go:303] Setting JSON to false
	I1127 23:57:02.689652   27732 mustload.go:65] Loading cluster: multinode-883509
	I1127 23:57:02.689739   27732 notify.go:220] Checking for updates...
	I1127 23:57:02.690020   27732 config.go:182] Loaded profile config "multinode-883509": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1127 23:57:02.690033   27732 status.go:255] checking status of multinode-883509 ...
	I1127 23:57:02.690388   27732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:57:02.690441   27732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:57:02.705660   27732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34603
	I1127 23:57:02.706031   27732 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:57:02.706553   27732 main.go:141] libmachine: Using API Version  1
	I1127 23:57:02.706574   27732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:57:02.706930   27732 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:57:02.707096   27732 main.go:141] libmachine: (multinode-883509) Calling .GetState
	I1127 23:57:02.708677   27732 status.go:330] multinode-883509 host status = "Running" (err=<nil>)
	I1127 23:57:02.708694   27732 host.go:66] Checking if "multinode-883509" exists ...
	I1127 23:57:02.709018   27732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:57:02.709051   27732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:57:02.723414   27732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33497
	I1127 23:57:02.723815   27732 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:57:02.724228   27732 main.go:141] libmachine: Using API Version  1
	I1127 23:57:02.724260   27732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:57:02.724583   27732 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:57:02.724783   27732 main.go:141] libmachine: (multinode-883509) Calling .GetIP
	I1127 23:57:02.727611   27732 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:57:02.728072   27732 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:57:02.728101   27732 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:57:02.728241   27732 host.go:66] Checking if "multinode-883509" exists ...
	I1127 23:57:02.728641   27732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:57:02.728681   27732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:57:02.742463   27732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34009
	I1127 23:57:02.742834   27732 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:57:02.743233   27732 main.go:141] libmachine: Using API Version  1
	I1127 23:57:02.743251   27732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:57:02.743534   27732 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:57:02.743703   27732 main.go:141] libmachine: (multinode-883509) Calling .DriverName
	I1127 23:57:02.743875   27732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 23:57:02.743908   27732 main.go:141] libmachine: (multinode-883509) Calling .GetSSHHostname
	I1127 23:57:02.746563   27732 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:57:02.747012   27732 main.go:141] libmachine: (multinode-883509) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:08:02", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:54:24 +0000 UTC Type:0 Mac:52:54:00:e1:08:02 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-883509 Clientid:01:52:54:00:e1:08:02}
	I1127 23:57:02.747049   27732 main.go:141] libmachine: (multinode-883509) DBG | domain multinode-883509 has defined IP address 192.168.39.159 and MAC address 52:54:00:e1:08:02 in network mk-multinode-883509
	I1127 23:57:02.747156   27732 main.go:141] libmachine: (multinode-883509) Calling .GetSSHPort
	I1127 23:57:02.747308   27732 main.go:141] libmachine: (multinode-883509) Calling .GetSSHKeyPath
	I1127 23:57:02.747427   27732 main.go:141] libmachine: (multinode-883509) Calling .GetSSHUsername
	I1127 23:57:02.747567   27732 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509/id_rsa Username:docker}
	I1127 23:57:02.844522   27732 ssh_runner.go:195] Run: systemctl --version
	I1127 23:57:02.850513   27732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:57:02.864634   27732 kubeconfig.go:92] found "multinode-883509" server: "https://192.168.39.159:8443"
	I1127 23:57:02.864659   27732 api_server.go:166] Checking apiserver status ...
	I1127 23:57:02.864701   27732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 23:57:02.876775   27732 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1095/cgroup
	I1127 23:57:02.886219   27732 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod3b5e7b5fdb84862f46e6248e54c84795/crio-e20c635ccf67e663f0c39e76c571c9333f1e9b985a5dba6a137bc8e3af2bfd8d"
	I1127 23:57:02.886291   27732 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod3b5e7b5fdb84862f46e6248e54c84795/crio-e20c635ccf67e663f0c39e76c571c9333f1e9b985a5dba6a137bc8e3af2bfd8d/freezer.state
	I1127 23:57:02.896412   27732 api_server.go:204] freezer state: "THAWED"
	I1127 23:57:02.896432   27732 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1127 23:57:02.904278   27732 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I1127 23:57:02.904307   27732 status.go:421] multinode-883509 apiserver status = Running (err=<nil>)
	I1127 23:57:02.904317   27732 status.go:257] multinode-883509 status: &{Name:multinode-883509 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1127 23:57:02.904332   27732 status.go:255] checking status of multinode-883509-m02 ...
	I1127 23:57:02.904693   27732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:57:02.904736   27732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:57:02.918790   27732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42739
	I1127 23:57:02.919216   27732 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:57:02.919657   27732 main.go:141] libmachine: Using API Version  1
	I1127 23:57:02.919679   27732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:57:02.919971   27732 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:57:02.920129   27732 main.go:141] libmachine: (multinode-883509-m02) Calling .GetState
	I1127 23:57:02.921792   27732 status.go:330] multinode-883509-m02 host status = "Running" (err=<nil>)
	I1127 23:57:02.921815   27732 host.go:66] Checking if "multinode-883509-m02" exists ...
	I1127 23:57:02.922073   27732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:57:02.922109   27732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:57:02.936232   27732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33279
	I1127 23:57:02.936571   27732 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:57:02.937016   27732 main.go:141] libmachine: Using API Version  1
	I1127 23:57:02.937040   27732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:57:02.937340   27732 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:57:02.937500   27732 main.go:141] libmachine: (multinode-883509-m02) Calling .GetIP
	I1127 23:57:02.939801   27732 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:57:02.940161   27732 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1127 23:57:02.940193   27732 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:57:02.940282   27732 host.go:66] Checking if "multinode-883509-m02" exists ...
	I1127 23:57:02.940571   27732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:57:02.940603   27732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:57:02.954259   27732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46861
	I1127 23:57:02.954580   27732 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:57:02.954971   27732 main.go:141] libmachine: Using API Version  1
	I1127 23:57:02.954990   27732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:57:02.955267   27732 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:57:02.955436   27732 main.go:141] libmachine: (multinode-883509-m02) Calling .DriverName
	I1127 23:57:02.955605   27732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 23:57:02.955624   27732 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHHostname
	I1127 23:57:02.958068   27732 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:57:02.958441   27732 main.go:141] libmachine: (multinode-883509-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:23:98", ip: ""} in network mk-multinode-883509: {Iface:virbr1 ExpiryTime:2023-11-28 00:55:31 +0000 UTC Type:0 Mac:52:54:00:10:23:98 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:multinode-883509-m02 Clientid:01:52:54:00:10:23:98}
	I1127 23:57:02.958472   27732 main.go:141] libmachine: (multinode-883509-m02) DBG | domain multinode-883509-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:10:23:98 in network mk-multinode-883509
	I1127 23:57:02.958559   27732 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHPort
	I1127 23:57:02.958692   27732 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHKeyPath
	I1127 23:57:02.958814   27732 main.go:141] libmachine: (multinode-883509-m02) Calling .GetSSHUsername
	I1127 23:57:02.958937   27732 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17206-4749/.minikube/machines/multinode-883509-m02/id_rsa Username:docker}
	I1127 23:57:03.043922   27732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:57:03.057854   27732 status.go:257] multinode-883509-m02 status: &{Name:multinode-883509-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1127 23:57:03.057892   27732 status.go:255] checking status of multinode-883509-m03 ...
	I1127 23:57:03.058220   27732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1127 23:57:03.058254   27732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 23:57:03.073348   27732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44571
	I1127 23:57:03.073729   27732 main.go:141] libmachine: () Calling .GetVersion
	I1127 23:57:03.074167   27732 main.go:141] libmachine: Using API Version  1
	I1127 23:57:03.074187   27732 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 23:57:03.074493   27732 main.go:141] libmachine: () Calling .GetMachineName
	I1127 23:57:03.074693   27732 main.go:141] libmachine: (multinode-883509-m03) Calling .GetState
	I1127 23:57:03.076073   27732 status.go:330] multinode-883509-m03 host status = "Stopped" (err=<nil>)
	I1127 23:57:03.076086   27732 status.go:343] host is not running, skipping remaining checks
	I1127 23:57:03.076092   27732 status.go:257] multinode-883509-m03 status: &{Name:multinode-883509-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
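Stopping a single worker leaves the rest of the cluster untouched, and minikube status reports the partially stopped state with exit code 7; roughly:

$ out/minikube-linux-amd64 -p multinode-883509 node stop m03
$ out/minikube-linux-amd64 -p multinode-883509 status; echo "exit=$?"
# m03 reports host/kubelet Stopped and status exits 7; 'node start m03' (the next test) brings it back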

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (30.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 node start m03 --alsologtostderr
E1127 23:57:23.116479   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-883509 node start m03 --alsologtostderr: (30.046528499s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.69s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (1.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-883509 node delete m03: (1.231704084s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.78s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (445.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-883509 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1128 00:13:30.727134   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1128 00:13:50.987995   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1128 00:15:27.681671   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1128 00:16:55.433442   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1128 00:18:50.987645   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-883509 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m24.865260533s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-883509 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (445.40s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (48.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-883509
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-883509-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-883509-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (75.02199ms)

                                                
                                                
-- stdout --
	* [multinode-883509-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-883509-m02' is duplicated with machine name 'multinode-883509-m02' in profile 'multinode-883509'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-883509-m03 --driver=kvm2  --container-runtime=crio
E1128 00:20:27.680978   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-883509-m03 --driver=kvm2  --container-runtime=crio: (47.655820641s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-883509
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-883509: exit status 80 (227.433617ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-883509
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-883509-m03 already exists in multinode-883509-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-883509-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.99s)
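The conflict check rests on profile names being unique across the machine store: multinode-883509-m02 is already a machine inside the multinode-883509 profile, so reusing that name as a standalone profile fails (exit 14), and node add likewise refuses to absorb an existing standalone profile (exit 80). A sketch of the failing invocation:

$ out/minikube-linux-amd64 start -p multinode-883509-m02 --driver=kvm2 --container-runtime=crio
# rejected: X Exiting due to MK_USAGE: Profile name should be unique (exit status 14)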

                                                
                                    
x
+
TestScheduledStopUnix (116.79s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-658772 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-658772 --memory=2048 --driver=kvm2  --container-runtime=crio: (45.030319794s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-658772 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-658772 -n scheduled-stop-658772
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-658772 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-658772 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-658772 -n scheduled-stop-658772
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-658772
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-658772 --schedule 15s
E1128 00:26:55.432456   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-658772
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-658772: exit status 7 (74.469615ms)

                                                
                                                
-- stdout --
	scheduled-stop-658772
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-658772 -n scheduled-stop-658772
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-658772 -n scheduled-stop-658772: exit status 7 (74.668378ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-658772" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-658772
--- PASS: TestScheduledStopUnix (116.79s)
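Scheduled stops are armed, re-armed, and cancelled entirely through the stop subcommand, with the pending timer visible in the TimeToStop status field; a minimal sketch with the profile from this run:

$ out/minikube-linux-amd64 stop -p scheduled-stop-658772 --schedule 5m
$ out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-658772
$ out/minikube-linux-amd64 stop -p scheduled-stop-658772 --cancel-scheduled
$ out/minikube-linux-amd64 stop -p scheduled-stop-658772 --schedule 15s
# after roughly 15s the host reports Stopped and status exits 7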

                                                
                                    
x
+
TestKubernetesUpgrade (192.53s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-194564 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-194564 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m35.564306871s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-194564
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-194564: (6.14526871s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-194564 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-194564 status --format={{.Host}}: exit status 7 (86.716332ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-194564 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-194564 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.789015386s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-194564 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-194564 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-194564 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (102.492498ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-194564] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-194564
	    minikube start -p kubernetes-upgrade-194564 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1945642 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-194564 --kubernetes-version=v1.29.0-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-194564 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-194564 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.757957755s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-194564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-194564
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-194564: (1.01620393s)
--- PASS: TestKubernetesUpgrade (192.53s)
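The upgrade path above is: install v1.16.0, stop the cluster, restart on v1.29.0-rc.0, then confirm an in-place downgrade is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED). Condensed, with the optional verbosity flags dropped:

$ out/minikube-linux-amd64 start -p kubernetes-upgrade-194564 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
$ out/minikube-linux-amd64 stop -p kubernetes-upgrade-194564
$ out/minikube-linux-amd64 start -p kubernetes-upgrade-194564 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --driver=kvm2 --container-runtime=crio
$ out/minikube-linux-amd64 start -p kubernetes-upgrade-194564 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
# the last command is rejected; recreate the profile (or use a new one) to go back to v1.16.0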

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-165445 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-165445 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (93.277226ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-165445] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (105.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-165445 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-165445 --driver=kvm2  --container-runtime=crio: (1m45.140373083s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-165445 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (105.43s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (63.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-165445 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-165445 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m1.806531379s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-165445 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-165445 status -o json: exit status 2 (341.670592ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-165445","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-165445
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-165445: (1.265666957s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (63.41s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (28.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-165445 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-165445 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.883514207s)
--- PASS: TestNoKubernetes/serial/Start (28.88s)
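The --no-kubernetes starts above provision only the VM, which is also why pairing the flag with --kubernetes-version is rejected in StartNoK8sWithVersion; status then shows a running host with kubelet and apiserver stopped (and exits 2). Sketch with the profile from this run:

$ out/minikube-linux-amd64 start -p NoKubernetes-165445 --no-kubernetes --driver=kvm2 --container-runtime=crio
$ out/minikube-linux-amd64 -p NoKubernetes-165445 status -o json
# {"Name":"NoKubernetes-165445","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped",...}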

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-167798 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-167798 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (129.372051ms)

                                                
                                                
-- stdout --
	* [false-167798] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 00:30:25.220411   38672 out.go:296] Setting OutFile to fd 1 ...
	I1128 00:30:25.220700   38672 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:30:25.220711   38672 out.go:309] Setting ErrFile to fd 2...
	I1128 00:30:25.220716   38672 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:30:25.220932   38672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-4749/.minikube/bin
	I1128 00:30:25.221567   38672 out.go:303] Setting JSON to false
	I1128 00:30:25.222504   38672 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4372,"bootTime":1701127053,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 00:30:25.222562   38672 start.go:138] virtualization: kvm guest
	I1128 00:30:25.224444   38672 out.go:177] * [false-167798] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 00:30:25.226109   38672 out.go:177]   - MINIKUBE_LOCATION=17206
	I1128 00:30:25.226171   38672 notify.go:220] Checking for updates...
	I1128 00:30:25.227684   38672 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 00:30:25.229298   38672 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-4749/kubeconfig
	I1128 00:30:25.230749   38672 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-4749/.minikube
	I1128 00:30:25.232199   38672 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 00:30:25.233694   38672 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 00:30:25.235768   38672 config.go:182] Loaded profile config "NoKubernetes-165445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1128 00:30:25.235925   38672 config.go:182] Loaded profile config "force-systemd-env-438559": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 00:30:25.236046   38672 config.go:182] Loaded profile config "kubernetes-upgrade-194564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 00:30:25.236165   38672 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 00:30:25.274218   38672 out.go:177] * Using the kvm2 driver based on user configuration
	I1128 00:30:25.275618   38672 start.go:298] selected driver: kvm2
	I1128 00:30:25.275637   38672 start.go:902] validating driver "kvm2" against <nil>
	I1128 00:30:25.275657   38672 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 00:30:25.278197   38672 out.go:177] 
	W1128 00:30:25.279631   38672 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1128 00:30:25.281032   38672 out.go:177] 

                                                
                                                
** /stderr **
E1128 00:30:27.680651   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
net_test.go:88: 
----------------------- debugLogs start: false-167798 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-167798

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-167798

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-167798

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-167798

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-167798

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-167798

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-167798

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-167798

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-167798

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-167798

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-167798

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-167798" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-167798" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 28 Nov 2023 00:30:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.91:8443
  name: force-systemd-env-438559
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 28 Nov 2023 00:29:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.61.171:8443
  name: kubernetes-upgrade-194564
contexts:
- context:
    cluster: force-systemd-env-438559
    extensions:
    - extension:
        last-update: Tue, 28 Nov 2023 00:30:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: force-systemd-env-438559
  name: force-systemd-env-438559
- context:
    cluster: kubernetes-upgrade-194564
    extensions:
    - extension:
        last-update: Tue, 28 Nov 2023 00:29:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-194564
  name: kubernetes-upgrade-194564
current-context: force-systemd-env-438559
kind: Config
preferences: {}
users:
- name: force-systemd-env-438559
  user:
    client-certificate: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/force-systemd-env-438559/client.crt
    client-key: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/force-systemd-env-438559/client.key
- name: kubernetes-upgrade-194564
  user:
    client-certificate: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/kubernetes-upgrade-194564/client.crt
    client-key: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/kubernetes-upgrade-194564/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-167798

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-167798"

                                                
                                                
----------------------- debugLogs end: false-167798 [took: 4.194011617s] --------------------------------
helpers_test.go:175: Cleaning up "false-167798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-167798
--- PASS: TestNetworkPlugins/group/false (4.49s)
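Note: every debugLogs collector above fails with "context was not found for specified context: false-167798" because the false-167798 profile never starts a cluster, so no kubeconfig context is created for it. A minimal sketch, assuming kubectl and minikube are on PATH and reusing the profile name from this run, of how one might confirm the context exists before collecting debug output (this guard is not part of the test's own logic):
    # List kubeconfig context names; false-167798 is absent in this run
    kubectl config get-contexts -o name
    # List minikube profiles; an undeleted profile would appear in this table
    minikube profile list
    # Collect debug output only when the context actually exists
    if kubectl config get-contexts -o name | grep -qx "false-167798"; then
      kubectl --context false-167798 get nodes
    fi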

                                                
                                    
x
+
TestPause/serial/Start (112.92s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-896833 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-896833 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m52.91665059s)
--- PASS: TestPause/serial/Start (112.92s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-165445 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-165445 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.292273ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
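Note: the non-zero exit above is the expected result; `systemctl is-active` returns a non-zero status (3 for an inactive unit) when kubelet is not running, which is exactly what a NoKubernetes profile should look like. A hedged sketch of checking this by hand against the same profile:
    # Print the unit state directly; expect "inactive" and a non-zero exit on a NoKubernetes profile
    minikube ssh -p NoKubernetes-165445 "sudo systemctl is-active kubelet"
    # Or mirror the test and only look at the exit code
    minikube ssh -p NoKubernetes-165445 "sudo systemctl is-active --quiet kubelet"; echo "exit: $?"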

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.41s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-165445
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-165445: (1.17347851s)
--- PASS: TestNoKubernetes/serial/Stop (1.17s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (42.4s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-896833 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-896833 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.369904937s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (42.40s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.32s)

                                                
                                    
x
+
TestPause/serial/Pause (1.5s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-896833 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-896833 --alsologtostderr -v=5: (1.502972172s)
--- PASS: TestPause/serial/Pause (1.50s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-896833 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-896833 --output=json --layout=cluster: exit status 2 (291.011489ms)

                                                
                                                
-- stdout --
	{"Name":"pause-896833","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-896833","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
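Note: exit status 2 is expected here; with `--layout=cluster` the status JSON reports the paused control plane via StatusCode 418 ("Paused") while kubelet shows 405 ("Stopped"). A hedged sketch, assuming jq is available, of summarizing the per-component states from that JSON:
    # Flatten the Nodes/Components structure into "node component: state" lines
    minikube status -p pause-896833 --output=json --layout=cluster \
      | jq -r '.Nodes[] | .Name as $n | .Components | to_entries[] | "\($n) \(.key): \(.value.StatusName)"'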

                                                
                                    
x
+
TestPause/serial/Unpause (0.78s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-896833 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.78s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.4s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-896833 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-896833 --alsologtostderr -v=5: (1.397154664s)
--- PASS: TestPause/serial/PauseAgain (1.40s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.05s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-896833 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-896833 --alsologtostderr -v=5: (1.052418367s)
--- PASS: TestPause/serial/DeletePaused (1.05s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (5.87s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (5.870735446s)
--- PASS: TestPause/serial/VerifyDeletedResources (5.87s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (132.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-732472 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-732472 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m12.365305453s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (132.37s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (208.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-473615 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0
E1128 00:33:50.988538   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-473615 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0: (3m28.569342057s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (208.57s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (62.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-304541 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E1128 00:35:27.680861   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-304541 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m2.371886054s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (62.37s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (11.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-732472 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3e3dfeca-6d35-4429-84c9-0ad534948a63] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3e3dfeca-6d35-4429-84c9-0ad534948a63] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.03854633s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-732472 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.51s)
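Note: the DeployApp step follows a create / wait-for-ready / exec pattern. A hedged sketch of reproducing it manually against the same profile, using the label selector the test polls on (the `kubectl wait` form is a stand-in for the test's own polling helper):
    # Deploy the busybox test pod from the repo's testdata manifest
    kubectl --context old-k8s-version-732472 create -f testdata/busybox.yaml
    # Wait for the pod matching the test's label selector to become Ready (the test allows up to 8m)
    kubectl --context old-k8s-version-732472 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m0s
    # Verify the open-file limit inside the container, as the test does
    kubectl --context old-k8s-version-732472 exec busybox -- /bin/sh -c "ulimit -n"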

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-732472 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-732472 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.84s)
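Note: this step enables metrics-server on a running cluster while overriding its image and registry. A hedged sketch of checking which image the resulting kube-system Deployment actually references (assuming, as the describe step above does, that the addon creates a Deployment named metrics-server in kube-system):
    # Enable the addon with image and registry overrides, as the test does
    minikube addons enable metrics-server -p old-k8s-version-732472 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    # Inspect the image the Deployment ended up with
    kubectl --context old-k8s-version-732472 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'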

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-304541 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a9066221-2fd6-4445-8204-05af4bc2d1f3] Pending
helpers_test.go:344: "busybox" [a9066221-2fd6-4445-8204-05af4bc2d1f3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a9066221-2fd6-4445-8204-05af4bc2d1f3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.028777859s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-304541 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.45s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-304541 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-304541 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.070885716s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-304541 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-789586
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.40s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-473615 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [696e346b-59fd-4c89-9de6-9a4dbac957c4] Pending
helpers_test.go:344: "busybox" [696e346b-59fd-4c89-9de6-9a4dbac957c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [696e346b-59fd-4c89-9de6-9a4dbac957c4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.032130332s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-473615 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.93s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-488423 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-488423 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m1.186358172s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-473615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-473615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.041991338s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-473615 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-488423 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [95d5410e-5ec3-42c3-a64c-9d6034cc2479] Pending
helpers_test.go:344: "busybox" [95d5410e-5ec3-42c3-a64c-9d6034cc2479] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [95d5410e-5ec3-42c3-a64c-9d6034cc2479] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.02239892s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-488423 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.39s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-488423 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-488423 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.042489147s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-488423 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (779.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-732472 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E1128 00:38:34.035147   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-732472 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (12m59.688212775s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-732472 -n old-k8s-version-732472
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (779.96s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (570.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-304541 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-304541 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (9m30.400673678s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-304541 -n embed-certs-304541
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (570.67s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (576.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-473615 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-473615 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0: (9m36.683627267s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-473615 -n no-preload-473615
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (576.98s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (489.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-488423 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E1128 00:41:38.478784   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1128 00:41:55.432995   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1128 00:43:50.988182   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1128 00:45:27.680393   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1128 00:46:50.728432   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1128 00:46:55.432502   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-488423 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (8m8.777387559s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-488423 -n default-k8s-diff-port-488423
E1128 00:48:50.987944   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (489.09s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (64.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-517109 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-517109 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0: (1m4.662172702s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (64.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (126.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-167798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E1128 01:03:30.729142   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1128 01:03:50.988193   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-167798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m6.202269956s)
--- PASS: TestNetworkPlugins/group/auto/Start (126.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-517109 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-517109 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.239318356s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (95.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-167798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E1128 01:05:27.680778   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1128 01:05:32.967789   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.crt: no such file or directory
E1128 01:05:32.973087   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.crt: no such file or directory
E1128 01:05:32.983362   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.crt: no such file or directory
E1128 01:05:33.003708   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.crt: no such file or directory
E1128 01:05:33.044014   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.crt: no such file or directory
E1128 01:05:33.124352   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.crt: no such file or directory
E1128 01:05:33.284868   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.crt: no such file or directory
E1128 01:05:33.605531   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.crt: no such file or directory
E1128 01:05:34.246477   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.crt: no such file or directory
E1128 01:05:35.527358   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-167798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m35.250790268s)
--- PASS: TestNetworkPlugins/group/calico/Start (95.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-167798 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-167798 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9c72w" [a5753a3a-ad37-4012-a29e-5102fa40270e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1128 01:05:38.088101   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-9c72w" [a5753a3a-ad37-4012-a29e-5102fa40270e] Running
E1128 01:05:43.208939   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.017477942s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-167798 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-167798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-167798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
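Note: DNS, Localhost, and HairPin are the three connectivity probes run against the netcat deployment once NetCatPod is healthy. Collected here as one reference snippet, using the same commands the tests above issue:
    # Cluster DNS: resolve the kubernetes.default service from inside the netcat pod
    kubectl --context auto-167798 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: the pod should reach its own port 8080 over loopback
    kubectl --context auto-167798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # Hairpin: the pod should reach itself back through its own Service name
    kubectl --context auto-167798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"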

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (90.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-167798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1128 01:06:13.930818   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-167798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m30.894217313s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (90.89s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (71.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-167798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1128 01:06:58.020796   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-167798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m11.301324735s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rbstx" [c8b8f237-d0c2-4713-81b1-0ed89caf03ff] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.030218059s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (431.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-517109 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-517109 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0: (7m11.002798651s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-517109 -n newest-cni-517109
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (431.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-167798 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (13.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-167798 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kjxdj" [c2366387-3e2a-4702-b9dd-7a6f01f9e697] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1128 01:07:05.701509   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-kjxdj" [c2366387-3e2a-4702-b9dd-7a6f01f9e697] Running
E1128 01:07:15.942270   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.012126829s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.48s)
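The NetCatPod step force-replaces a small netcat Deployment and waits for it to become healthy; a manual sketch, assuming testdata/netcat-deployment.yaml from the minikube source checkout is available in the working directory:
kubectl --context calico-167798 replace --force -f testdata/netcat-deployment.yaml
kubectl --context calico-167798 rollout status deployment/netcat --timeout=900s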

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-167798 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-167798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-167798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)
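The DNS, Localhost and HairPin checks above all exec into the same netcat Deployment; a combined sketch of the three probes (assumes the netcat Deployment from the previous step is still running):
kubectl --context calico-167798 exec deployment/netcat -- nslookup kubernetes.default                       # in-cluster DNS
kubectl --context calico-167798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"       # localhost reachability
kubectl --context calico-167798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"          # hairpin back to the pod's own service name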

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-167798 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-167798 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-c24xq" [9ab75c1d-581c-46d9-b866-0e5b4a5a6977] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-c24xq" [9ab75c1d-581c-46d9-b866-0e5b4a5a6977] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.011323722s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (351.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-167798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-167798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (5m51.790613769s)
--- PASS: TestNetworkPlugins/group/flannel/Start (351.79s)
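Unlike kindnet and calico, the flannel agent runs in its own kube-flannel namespace (see the flannel/ControllerPod step later in this report); a quick post-start check, assuming the flannel-167798 context:
kubectl --context flannel-167798 get pods -n kube-flannel -l app=flannel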

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-167798 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-167798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-167798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (327.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-167798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1128 01:08:08.056063   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-167798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (5m27.041489132s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (327.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-9p726" [83b68703-74cb-41db-a52c-ff005720eb93] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.021039019s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-167798 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-167798 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pzdn4" [69f1e252-2baa-4bb4-be4e-f05b4a4c28c1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1128 01:08:16.811772   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.crt: no such file or directory
E1128 01:08:17.383924   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.crt: no such file or directory
E1128 01:08:18.296530   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-pzdn4" [69f1e252-2baa-4bb4-be4e-f05b4a4c28c1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.0103433s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-167798 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-167798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-167798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (357.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-167798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1128 01:08:50.988650   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1128 01:09:19.737484   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/client.crt: no such file or directory
E1128 01:09:39.305127   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.crt: no such file or directory
E1128 01:10:27.680397   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/addons-052905/client.crt: no such file or directory
E1128 01:10:32.968159   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.crt: no such file or directory
E1128 01:10:36.251896   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/client.crt: no such file or directory
E1128 01:10:36.257151   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/client.crt: no such file or directory
E1128 01:10:36.267413   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/client.crt: no such file or directory
E1128 01:10:36.287637   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/client.crt: no such file or directory
E1128 01:10:36.327901   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/client.crt: no such file or directory
E1128 01:10:36.408241   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/client.crt: no such file or directory
E1128 01:10:36.568609   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/client.crt: no such file or directory
E1128 01:10:36.889247   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/client.crt: no such file or directory
E1128 01:10:37.529949   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/client.crt: no such file or directory
E1128 01:10:38.810211   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/client.crt: no such file or directory
E1128 01:10:41.370450   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/client.crt: no such file or directory
E1128 01:10:41.657775   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/client.crt: no such file or directory
E1128 01:10:46.491204   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/client.crt: no such file or directory
E1128 01:10:56.732217   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/client.crt: no such file or directory
E1128 01:11:00.652843   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/old-k8s-version-732472/client.crt: no such file or directory
E1128 01:11:17.212742   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/client.crt: no such file or directory
E1128 01:11:54.036105   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/functional-004462/client.crt: no such file or directory
E1128 01:11:55.432582   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/ingress-addon-legacy-142525/client.crt: no such file or directory
E1128 01:11:55.461853   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.crt: no such file or directory
E1128 01:11:58.173590   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/client.crt: no such file or directory
E1128 01:11:59.666684   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/calico-167798/client.crt: no such file or directory
E1128 01:11:59.671969   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/calico-167798/client.crt: no such file or directory
E1128 01:11:59.682209   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/calico-167798/client.crt: no such file or directory
E1128 01:11:59.702555   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/calico-167798/client.crt: no such file or directory
E1128 01:11:59.742921   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/calico-167798/client.crt: no such file or directory
E1128 01:11:59.823316   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/calico-167798/client.crt: no such file or directory
E1128 01:11:59.983804   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/calico-167798/client.crt: no such file or directory
E1128 01:12:00.304425   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/calico-167798/client.crt: no such file or directory
E1128 01:12:00.945549   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/calico-167798/client.crt: no such file or directory
E1128 01:12:02.225875   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/calico-167798/client.crt: no such file or directory
E1128 01:12:04.786973   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/calico-167798/client.crt: no such file or directory
E1128 01:12:09.907369   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/calico-167798/client.crt: no such file or directory
E1128 01:12:20.148271   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/calico-167798/client.crt: no such file or directory
E1128 01:12:23.146156   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/no-preload-473615/client.crt: no such file or directory
E1128 01:12:36.819993   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/client.crt: no such file or directory
E1128 01:12:36.825299   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/client.crt: no such file or directory
E1128 01:12:36.835593   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/client.crt: no such file or directory
E1128 01:12:36.855891   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/client.crt: no such file or directory
E1128 01:12:36.896266   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/client.crt: no such file or directory
E1128 01:12:36.976653   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/client.crt: no such file or directory
E1128 01:12:37.137098   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/client.crt: no such file or directory
E1128 01:12:37.457684   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/client.crt: no such file or directory
E1128 01:12:38.097944   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/client.crt: no such file or directory
E1128 01:12:39.378794   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/client.crt: no such file or directory
E1128 01:12:40.628523   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/calico-167798/client.crt: no such file or directory
E1128 01:12:41.939118   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/client.crt: no such file or directory
E1128 01:12:47.059624   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/client.crt: no such file or directory
E1128 01:12:57.299861   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/client.crt: no such file or directory
E1128 01:12:57.815151   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/client.crt: no such file or directory
E1128 01:13:09.317986   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/kindnet-167798/client.crt: no such file or directory
E1128 01:13:09.323304   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/kindnet-167798/client.crt: no such file or directory
E1128 01:13:09.333620   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/kindnet-167798/client.crt: no such file or directory
E1128 01:13:09.353939   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/kindnet-167798/client.crt: no such file or directory
E1128 01:13:09.394266   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/kindnet-167798/client.crt: no such file or directory
E1128 01:13:09.474590   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/kindnet-167798/client.crt: no such file or directory
E1128 01:13:09.634979   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/kindnet-167798/client.crt: no such file or directory
E1128 01:13:09.955694   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/kindnet-167798/client.crt: no such file or directory
E1128 01:13:10.596169   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/kindnet-167798/client.crt: no such file or directory
E1128 01:13:11.876961   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/kindnet-167798/client.crt: no such file or directory
E1128 01:13:14.437437   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/kindnet-167798/client.crt: no such file or directory
E1128 01:13:17.780841   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/custom-flannel-167798/client.crt: no such file or directory
E1128 01:13:19.557932   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/kindnet-167798/client.crt: no such file or directory
E1128 01:13:20.094154   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/auto-167798/client.crt: no such file or directory
E1128 01:13:21.589096   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/calico-167798/client.crt: no such file or directory
E1128 01:13:25.498269   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/default-k8s-diff-port-488423/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-167798 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (5m57.44151985s)
--- PASS: TestNetworkPlugins/group/bridge/Start (357.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-g5qlm" [d484b16c-c252-4683-80ae-f3898e1c4b93] Running
E1128 01:13:29.799157   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/kindnet-167798/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.055192868s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-167798 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-167798 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sdmsw" [a04cb5b8-8b15-468f-84fb-1687a1495007] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-sdmsw" [a04cb5b8-8b15-468f-84fb-1687a1495007] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.016471013s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-167798 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-167798 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-r9xjs" [204ac699-69dd-4136-a11c-4bf172947e08] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-r9xjs" [204ac699-69dd-4136-a11c-4bf172947e08] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.013774013s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-167798 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-167798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-167798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-167798 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-167798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-167798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-517109 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)
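The image check lists what CRI-O has pulled inside the node; an equivalent manual sketch using the human-readable table instead of JSON (assumes the newest-cni-517109 profile still exists):
out/minikube-linux-amd64 ssh -p newest-cni-517109 "sudo crictl images"
# add -o json, as the test does, for machine-readable output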

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.57s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-517109 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-517109 -n newest-cni-517109
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-517109 -n newest-cni-517109: exit status 2 (267.705601ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-517109 -n newest-cni-517109
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-517109 -n newest-cni-517109: exit status 2 (263.128072ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-517109 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-517109 -n newest-cni-517109
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-517109 -n newest-cni-517109
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.57s)
E1128 01:14:31.240476   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/kindnet-167798/client.crt: no such file or directory
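Note that the Pause test above treats exit status 2 from minikube status as expected while components are paused ("may be ok"). A manual sketch of the same pause/unpause round trip, assuming the newest-cni-517109 profile:
out/minikube-linux-amd64 pause -p newest-cni-517109
out/minikube-linux-amd64 status -p newest-cni-517109 --format={{.APIServer}}     # prints Paused and exits 2 while paused
out/minikube-linux-amd64 unpause -p newest-cni-517109
out/minikube-linux-amd64 status -p newest-cni-517109 --format={{.APIServer}}     # should report Running again once unpaused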

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-167798 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-167798 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8x2v6" [35d5f5f8-76ee-4258-9b62-d7dd2d42be9b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1128 01:14:43.509478   11930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/calico-167798/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-8x2v6" [35d5f5f8-76ee-4258-9b62-d7dd2d42be9b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.010617183s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-167798 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-167798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-167798 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (39/303)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.4/cached-images 0
13 TestDownloadOnly/v1.28.4/binaries 0
14 TestDownloadOnly/v1.28.4/kubectl 0
19 TestDownloadOnly/v1.29.0-rc.0/cached-images 0
20 TestDownloadOnly/v1.29.0-rc.0/binaries 0
21 TestDownloadOnly/v1.29.0-rc.0/kubectl 0
25 TestDownloadOnlyKic 0
39 TestAddons/parallel/Olm 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
119 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestGvisorAddon 0
158 TestImageBuild 0
191 TestKicCustomNetwork 0
192 TestKicExistingNetwork 0
193 TestKicCustomSubnet 0
194 TestKicStaticIP 0
225 TestChangeNoneUser 0
228 TestScheduledStopWindows 0
230 TestSkaffold 0
232 TestInsufficientStorage 0
236 TestMissingContainerUpgrade 0
245 TestStartStop/group/disable-driver-mounts 0.16
251 TestNetworkPlugins/group/kubenet 3.98
259 TestNetworkPlugins/group/cilium 4.27
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-001086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-001086
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-167798 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-167798

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-167798

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-167798

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-167798

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-167798

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-167798

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-167798

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-167798

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-167798

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-167798

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-167798

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-167798" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-167798" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 28 Nov 2023 00:29:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.61.171:8443
  name: kubernetes-upgrade-194564
contexts:
- context:
    cluster: kubernetes-upgrade-194564
    extensions:
    - extension:
        last-update: Tue, 28 Nov 2023 00:29:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-194564
  name: kubernetes-upgrade-194564
current-context: kubernetes-upgrade-194564
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-194564
  user:
    client-certificate: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/kubernetes-upgrade-194564/client.crt
    client-key: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/kubernetes-upgrade-194564/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-167798

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-167798"

                                                
                                                
----------------------- debugLogs end: kubenet-167798 [took: 3.811872771s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-167798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-167798
--- SKIP: TestNetworkPlugins/group/kubenet (3.98s)
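Note: every debugLogs probe above fails only because the kubenet-167798 profile was never created — the test skips before minikube start runs. For reference, a hand-run equivalent of a few of those probes against a live profile would look roughly like the sketch below; the --cni=bridge choice is an assumption made for illustration, since kubenet itself is what the crio job skips:

# start a throwaway profile on the same driver/runtime this job uses
minikube start -p kubenet-167798 --driver=kvm2 --container-runtime=crio --cni=bridge
# re-run a couple of the probes the debug logger attempted
kubectl --context kubenet-167798 get nodes,svc,endpoints -A
minikube -p kubenet-167798 ssh -- ls -R /etc/cni
# clean up, as helpers_test.go does above
minikube delete -p kubenet-167798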

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-167798 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-167798

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-167798

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-167798

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-167798

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-167798

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-167798

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-167798

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-167798

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-167798

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-167798

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-167798

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-167798" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-167798

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-167798

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-167798

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-167798

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-167798" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-167798" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17206-4749/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 28 Nov 2023 00:29:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.61.171:8443
  name: kubernetes-upgrade-194564
contexts:
- context:
    cluster: kubernetes-upgrade-194564
    extensions:
    - extension:
        last-update: Tue, 28 Nov 2023 00:29:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-194564
  name: kubernetes-upgrade-194564
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-194564
  user:
    client-certificate: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/kubernetes-upgrade-194564/client.crt
    client-key: /home/jenkins/minikube-integration/17206-4749/.minikube/profiles/kubernetes-upgrade-194564/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-167798

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-167798" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-167798"

                                                
                                                
----------------------- debugLogs end: cilium-167798 [took: 4.075577127s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-167798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-167798
--- SKIP: TestNetworkPlugins/group/cilium (4.27s)
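Note: the cilium variant is skipped by policy in this suite, so its debugLogs hit the same missing-profile errors as kubenet above. Outside this job, minikube can bring up the Cilium CNI directly; the profile name below is illustrative and the daemonset name is the upstream default, not something verified by this run:

minikube start -p cilium-demo --driver=kvm2 --container-runtime=crio --cni=cilium
kubectl --context cilium-demo -n kube-system rollout status daemonset/cilium
minikube delete -p cilium-demo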

                                                
                                    